Posted to dev@ignite.apache.org by ALEKSEY KUZNETSOV <al...@gmail.com> on 2017/03/07 11:19:04 UTC

distributed transaction of non-single coordinator

Hi all! I'm designing a distributed transaction that can be started on one
node and continued on another. Does anybody have thoughts on this?
-- 

*Best Regards,*

*Kuznetsov Aleksey*
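
To make the ask concrete: the desired usage pattern would look something like
the sketch below. The suspend()/resume() calls are hypothetical placeholders
here - no such API existed in Ignite at the time of this thread - so this
illustrates the goal rather than working code.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.transactions.Transaction;

    public class CrossThreadTxSketch {
        public static void main(String[] args) throws Exception {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("sketch");

            // Thread 1: start the transaction and do part of the work.
            Transaction tx = ignite.transactions().txStart();
            cache.put(1, "written by the starting thread");
            tx.suspend(); // hypothetical: detach the tx from this thread

            Thread continuer = new Thread(() -> {
                // Thread 2: pick up the same transaction and finish it.
                tx.resume(); // hypothetical: attach the tx to this thread
                cache.put(2, "written by the continuing thread");
                tx.commit();
            });
            continuer.start();
            continuer.join();
        }
    }
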

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Yakov, I have a couple of questions regarding the tests proposal. Thanks!

On Fri, Jun 30, 2017 at 19:17, ALEKSEY KUZNETSOV <al...@gmail.com>:

> Thanks! Do you think all the test scenario results, presented in the table
> (in the ticket comments), are acceptable?
>
> On Fri, Jun 30, 2017 at 18:28, Yakov Zhdanov <yz...@gridgain.com>:
>
>> Alex, I have commented in the ticket. Please take a look.
>>
>> Thanks!
>> --
>> Yakov Zhdanov, Director R&D
>> *GridGain Systems*
>> www.gridgain.com
>>
>> 2017-06-29 17:27 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>>
>> > I've attached HangTest. I suppose it should not hang, am I right?
>> >
>> > On Thu, Jun 29, 2017 at 14:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> >
>> > > Igniters.
>> > > I'm reviewing all usages of a transaction's threadId
>> > > (IgniteTxAdapter#threadID). What is the point of using the threadId
>> > > in an MVCC entry?
>> > >
>> > > On Mon, Apr 3, 2017 at 9:47, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> > >
>> > >> So what do you think of my idea?
>> > >>
>> > >> On Fri, Mar 31, 2017 at 11:05, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> > >>
>> > >>> Sorry for misleading you. We planned to support multi-node
>> > >>> transactions, but failed.
>> > >>>
>> > >>> On Fri, Mar 31, 2017 at 10:51, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
>> > >>>
>> > >>> Well, now the scenario is clearer, but it has nothing to do with
>> > >>> multiple coordinators :) Let me think a little bit about it.
>> > >>>
>> > >>> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <
>> alkuznetsov.sb@gmail.com
>> > >:
>> > >>>
>> > >>> > So what do you think about the issue?
>> > >>> >
>> > >>> > On Thu, Mar 30, 2017 at 17:49, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> > >>> >
>> > >>> > > Hi! Thanks for the help. I've created a ticket:
>> > >>> > > https://issues.apache.org/jira/browse/IGNITE-4887
>> > >>> > > and a commit:
>> > >>> > > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06436b638e5c
>> > >>> > > We really need this feature.
>> > >>> > >
>> > >>> > > On Thu, Mar 30, 2017 at 11:31, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
>> > >>> > >
>> > >>> > > Aleksey,
>> > >>> > >
>> > >>> > > I doubt your approach works as expected. Current transaction
>> > recovery
>> > >>> > > protocol heavily relies on the originating node ID in its
>> internal
>> > >>> logic.
>> > >>> > > For example, currently a transaction will be rolled back if you
>> > want
>> > >>> to
>> > >>> > > transfer a transaction ownership to another node and original tx
>> > >>> owner
>> > >>> > > fails. An attempt to commit such a transaction on another node
>> may
>> > >>> fail
>> > >>> > > with all sorts of assertions. After transaction ownership
>> changed,
>> > >>> you
>> > >>> > need
>> > >>> > > to notify all current transaction participants about this
>> change,
>> > >>> and it
>> > >>> > > should also be done failover-safe, let alone that you did not
>> add
>> > any
>> > >>> > tests
>> > >>> > > for these cases.
>> > >>> > >
>> > >>> > > I back Denis here. Please create a ticket first and come up with
>> > >>> clear
>> > >>> > > use-cases, API and protocol changes design. It is hard to reason
>> > >>> about
>> > >>> > the
>> > >>> > > changes you've made when we do not even understand why you are
>> > making
>> > >>> > these
>> > >>> > > changes and how they are supposed to work.
>> > >>> > >
>> > >>> > > --AG
>> > >>> > >
>> > >>> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> alkuznetsov.sb@gmail.com>:
>> > >>> > >
>> > >>> > > > So, what do you think of my idea?
>> > >>> > > >
>> > >>> > > > On Wed, Mar 29, 2017 at 10:35, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> > >>> > > >
>> > >>> > > > > Hi! No, I don't have a ticket for this.
>> > >>> > > > > In my implementation I have added methods that change the
>> > >>> > > > > transaction status to STOP, thus letting another thread commit
>> > >>> > > > > the transaction. In that other thread you restart the
>> > >>> > > > > transaction in order to commit it.
>> > >>> > > > > The mechanism behind it is simple: we change the thread id to
>> > >>> > > > > the newer one in the thread map, and make use of serialization
>> > >>> > > > > of the txState and the transaction itself to transfer them
>> > >>> > > > > into another thread.
>> > >>> > > > >
>> > >>> > > > >
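
A minimal sketch of the handover mechanism described above; the class and
names are illustrative stand-ins, not the actual Ignite internals:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    /** Illustrative stand-in for a per-thread transaction map. */
    class TxThreadMap<T> {
        private final ConcurrentMap<Long, T> txByThreadId = new ConcurrentHashMap<>();

        void register(long threadId, T tx) {
            txByThreadId.put(threadId, tx);
        }

        /** Detach the tx from its old thread and attach it to the calling thread. */
        T transferToCurrentThread(long oldThreadId) {
            T tx = txByThreadId.remove(oldThreadId);
            if (tx != null)
                txByThreadId.put(Thread.currentThread().getId(), tx);
            return tx; // the txState itself is serialized and handed over separately
        }
    }
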
>> > >>> > > > > On Tue, Mar 28, 2017 at 20:15, Denis Magda <dmagda@apache.org>:
>> > >>> > > > >
>> > >>> > > > > Aleksey,
>> > >>> > > > >
>> > >>> > > > > Do you have a ticket for this? Could you briefly list what
>> > >>> > > > > exactly was done and how things work?
>> > >>> > > > >
>> > >>> > > > > —
>> > >>> > > > > Denis
>> > >>> > > > >
>> > >>> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
>> > >>> > > > alkuznetsov.sb@gmail.com>
>> > >>> > > > > wrote:
>> > >>> > > > > >
>> > >>> > > > > > Hi, Igniters! I've made an implementation of transactions
>> > >>> > > > > > with a non-single coordinator. Here you can start a
>> > >>> > > > > > transaction in one thread and commit it in another thread.
>> > >>> > > > > > Take a look at it and give your thoughts on it.
>> > >>> > > > > >
>> > >>> > > > > >
>> > >>> > > > > https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
>> > >>> > > > > >
>> > >>> > > > > > On Fri, Mar 17, 2017 at 19:26, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >
>> > >>> > > > > >> You know better, go ahead! :)
>> > >>> > > > > >>
>> > >>> > > > > >> Sergi
>> > >>> > > > > >>
>> > >>> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> > > > alkuznetsov.sb@gmail.com
>> > >>> > > > > >:
>> > >>> > > > > >>
>> > >>> > > > > >>> we've discovered several problems regarding your
>> > >>> > > > > >>> "accumulation" approach. These are:
>> > >>> > > > > >>>
>> > >>> > > > > >>>   1. performance issues when transferring data from the
>> > >>> > > > > >>>   temporary cache to the permanent one. Keep in mind the
>> > >>> > > > > >>>   big number of concurrent transactions in the Service
>> > >>> > > > > >>>   committer.
>> > >>> > > > > >>>   2. extreme memory load when keeping the temporary cache
>> > >>> > > > > >>>   in memory.
>> > >>> > > > > >>>   3. as long as the user is not acquainted with Ignite,
>> > >>> > > > > >>>   working with the cache must be transparent for him. Keep
>> > >>> > > > > >>>   this in mind. The user's node can evaluate logic with no
>> > >>> > > > > >>>   transaction at all, so we should deal with both types of
>> > >>> > > > > >>>   execution flow: transactional and non-transactional.
>> > >>> > > > > >>>   Another problem is transaction id support at the user
>> > >>> > > > > >>>   node. We would have to handle all these issues and many
>> > >>> > > > > >>>   more.
>> > >>> > > > > >>>   4. we cannot pessimistically lock an entity.
>> > >>> > > > > >>>
>> > >>> > > > > >>> As a result, we decided to move on to building a
>> > >>> > > > > >>> distributed transaction. We put aside your "accumulation"
>> > >>> > > > > >>> approach until we realize how to solve the difficulties
>> > >>> > > > > >>> above.
>> > >>> > > > > >>>
>> > >>> > > > > >>> On Thu, Mar 16, 2017 at 16:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>
>> > >>> > > > > >>>> The problem "How to run millions of entities, and
>> > >>> > > > > >>>> millions of operations, on a single Pentium3" is out of
>> > >>> > > > > >>>> scope here. Do the math, plan capacity reasonably.
>> > >>> > > > > >>>>
>> > >>> > > > > >>>> Sergi
>> > >>> > > > > >>>>
>> > >>> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> > > > > alkuznetsov.sb@gmail.com
>> > >>> > > > > >>> :
>> > >>> > > > > >>>>
>> > >>> > > > > >>>>> Hmm, if we have millions of entities and millions of
>> > >>> > > > > >>>>> operations, wouldn't this approach lead to memory
>> > >>> > > > > >>>>> overflow and performance degradation?
>> > >>> > > > > >>>>>
>> > >>> > > > > >>>>> On Thu, Mar 16, 2017 at 15:42, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>
>> > >>> > > > > >>>>>> 1. Actually you have to check versions on all the
>> > >>> > > > > >>>>>> values you have read during the tx.
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> put(k1, get(k2) + 5)
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> We have to remember the version for k2. This logic can
>> > >>> > > > > >>>>>> be relatively easily encapsulated in a framework atop
>> > >>> > > > > >>>>>> of Ignite. You need to implement one to make all this
>> > >>> > > > > >>>>>> stuff usable.
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> 2. I suggest avoiding any locking here, because you
>> > >>> > > > > >>>>>> will easily end up with deadlocks. If you do not have
>> > >>> > > > > >>>>>> too frequent updates for your keys, the optimistic
>> > >>> > > > > >>>>>> approach will work just fine.
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> Theoretically, in the Committer Service you can start
>> > >>> > > > > >>>>>> a thread for the lifetime of the whole distributed
>> > >>> > > > > >>>>>> transaction, take a lock on the key using
>> > >>> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services,
>> > >>> > > > > >>>>>> wait for all the services to complete, execute the
>> > >>> > > > > >>>>>> optimistic commit in the same thread while keeping this
>> > >>> > > > > >>>>>> lock, and then release it. Notice that all the Ignite
>> > >>> > > > > >>>>>> transactions inside of all Services must be optimistic
>> > >>> > > > > >>>>>> here to be able to read this locked key.
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> But again, I do not recommend using this approach
>> > >>> > > > > >>>>>> until you have a reliable deadlock avoidance scheme.
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> Sergi
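
A minimal sketch of the version tracking Sergi describes above, assuming
values carry a `ver` field as discussed in this thread. The VersionedValue
wrapper and the tracker are inventions for illustration; the transaction
calls themselves are the standard Ignite API:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    /** Value wrapper carrying a version, as assumed in this thread. */
    class VersionedValue {
        final Object val;
        final UUID ver;
        VersionedValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
    }

    /** Remembers the version of every value read, then re-checks them on commit. */
    class VersionTracker {
        private final Map<String, UUID> readVers = new HashMap<>();

        VersionedValue read(IgniteCache<String, VersionedValue> cache, String key) {
            VersionedValue v = cache.get(key);
            if (v != null)
                readVers.put(key, v.ver); // remember what the result depends on
            return v;
        }

        void commit(Ignite ignite, IgniteCache<String, VersionedValue> cache,
                    Map<String, VersionedValue> updates) {
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                for (Map.Entry<String, UUID> e : readVers.entrySet()) {
                    VersionedValue cur = cache.get(e.getKey());
                    // Fail if anything we read was changed since we read it.
                    if (cur == null || !cur.ver.equals(e.getValue()))
                        throw new IllegalStateException("Version mismatch on " + e.getKey());
                }
                for (Map.Entry<String, VersionedValue> e : updates.entrySet())
                    cache.put(e.getKey(), e.getValue());
                tx.commit();
            }
        }
    }
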
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> > > > > >>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>> :
>> > >>> > > > > >>>>>>
>> > >>> > > > > >>>>>>> Yeah, now I got it.
>> > >>> > > > > >>>>>>> There are some doubts about this approach:
>> > >>> > > > > >>>>>>> 1) During the optimistic commit phase, when you assure
>> > >>> > > > > >>>>>>> no one altered the original values, you must check the
>> > >>> > > > > >>>>>>> versions of other dependent keys. How could we obtain
>> > >>> > > > > >>>>>>> those keys (in an automatic manner, of course)?
>> > >>> > > > > >>>>>>> 2) How could we lock a key before some Service A
>> > >>> > > > > >>>>>>> introduces changes, so that no other service is
>> > >>> > > > > >>>>>>> allowed to change this key-value (a sort of
>> > >>> > > > > >>>>>>> pessimistic blocking)?
>> > >>> > > > > >>>>>>> Maybe you know some implementations of such an approach?
>> > >>> > > > > >>>>>>>
>> > >>> > > > > >>>>>>> On Wed, Mar 15, 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>> > >>> > > > > >>>>>>>
>> > >>> > > > > >>>>>>>> Thank you very much for the help. I will answer later.
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> On Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> All the services do not update keys in place; they
>> > >>> > > > > >>>>>>>> only generate new keys augmented by otx, store the
>> > >>> > > > > >>>>>>>> updated values in the same cache, and remember the
>> > >>> > > > > >>>>>>>> keys and versions participating in the transaction in
>> > >>> > > > > >>>>>>>> some separate atomic cache.
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Follow this sequence of changes applied to cache
>> > >>> contents by
>> > >>> > > > > >> each
>> > >>> > > > > >>>>>>> Service:
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Initial cache contents:
>> > >>> > > > > >>>>>>>>            [k1 => v1]
>> > >>> > > > > >>>>>>>>            [k2 => v2]
>> > >>> > > > > >>>>>>>>            [k3 => v3]
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Cache contents after Service A:
>> > >>> > > > > >>>>>>>>            [k1 => v1]
>> > >>> > > > > >>>>>>>>            [k2 => v2]
>> > >>> > > > > >>>>>>>>            [k3 => v3]
>> > >>> > > > > >>>>>>>>            [k1x => v1a]
>> > >>> > > > > >>>>>>>>            [k2x => v2a]
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some
>> > >>> separate
>> > >>> > > > > >>> atomic
>> > >>> > > > > >>>>>> cache
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Cache contents after Service B:
>> > >>> > > > > >>>>>>>>            [k1 => v1]
>> > >>> > > > > >>>>>>>>            [k2 => v2]
>> > >>> > > > > >>>>>>>>            [k3 => v3]
>> > >>> > > > > >>>>>>>>            [k1x => v1a]
>> > >>> > > > > >>>>>>>>            [k2x => v2ab]
>> > >>> > > > > >>>>>>>>            [k3x => v3b]
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 ->
>> ver3)]
>> > in
>> > >>> some
>> > >>> > > > > >>>>> separate
>> > >>> > > > > >>>>>>>> atomic cache
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Finally the Committer Service takes this map of
>> > >>> > > > > >>>>>>>> updated keys and their versions from some separate
>> > >>> > > > > >>>>>>>> atomic cache, starts an Ignite transaction and
>> > >>> > > > > >>>>>>>> replaces all the values for the k* keys with the
>> > >>> > > > > >>>>>>>> values taken from the k*x keys. The successful result
>> > >>> > > > > >>>>>>>> must be the following:
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>            [k1 => v1a]
>> > >>> > > > > >>>>>>>>            [k2 => v2ab]
>> > >>> > > > > >>>>>>>>            [k3 => v3b]
>> > >>> > > > > >>>>>>>>            [k1x => v1a]
>> > >>> > > > > >>>>>>>>            [k2x => v2ab]
>> > >>> > > > > >>>>>>>>            [k3x => v3b]
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 ->
>> ver3)]
>> > in
>> > >>> some
>> > >>> > > > > >>>>> separate
>> > >>> > > > > >>>>>>>> atomic cache
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> But the Committer Service also has to check that no
>> > >>> > > > > >>>>>>>> one updated the original values before us, because
>> > >>> > > > > >>>>>>>> otherwise we cannot give any serializability
>> > >>> > > > > >>>>>>>> guarantee for these distributed transactions. Here we
>> > >>> > > > > >>>>>>>> may need to check not only the versions of the
>> > >>> > > > > >>>>>>>> updated keys, but also the versions of any other keys
>> > >>> > > > > >>>>>>>> the end result depends on.
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> After that the Committer Service has to do a cleanup
>> > >>> > > > > >>>>>>>> (maybe outside of the committing tx) to come to the
>> > >>> > > > > >>>>>>>> following final state:
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>            [k1 => v1a]
>> > >>> > > > > >>>>>>>>            [k2 => v2ab]
>> > >>> > > > > >>>>>>>>            [k3 => v3b]
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Makes sense?
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> Sergi
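
A sketch of this committer step under the same assumptions: temporary keys
are modeled as plain string keys with an "x" suffix, the version map lives in
a made-up verCache, and VersionedValue is the wrapper from the earlier sketch:

    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    class CommitterServiceSketch {
        /** Promotes temporary k*x values to the final k* keys, then cleans up. */
        static void promote(Ignite ignite,
                            IgniteCache<String, VersionedValue> cache,
                            IgniteCache<UUID, Map<String, UUID>> verCache,
                            UUID otx) {
            Map<String, UUID> vers = verCache.get(otx); // [x => (k1 -> ver1, ...)]

            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                for (Map.Entry<String, UUID> e : vers.entrySet()) {
                    VersionedValue old = cache.get(e.getKey());
                    // Someone committed a newer version meanwhile: abort.
                    if (old == null || !old.ver.equals(e.getValue()))
                        throw new IllegalStateException("Concurrent update of " + e.getKey());
                    cache.put(e.getKey(), cache.get(e.getKey() + "x")); // k* <- k*x
                }
                tx.commit();
            }

            // Cleanup, possibly outside of the committing tx.
            for (String k : vers.keySet())
                cache.remove(k + "x");
            verCache.remove(otx);
        }
    }
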
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> > > > > >>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>> :
>> > >>> > > > > >>>>>>>>
>> > >>> > > > > >>>>>>>>>   - What do you mean by saying "*in a single
>> > >>> > > > > >>>>>>>>>   transaction checks value versions for all the old
>> > >>> > > > > >>>>>>>>>   values and replaces them with calculated new
>> > >>> > > > > >>>>>>>>>   ones*"? Every time you change a value (in some
>> > >>> > > > > >>>>>>>>>   service), you store it to *some special atomic
>> > >>> > > > > >>>>>>>>>   cache*, so when all services have ceased working,
>> > >>> > > > > >>>>>>>>>   the Service committer gets the values with the
>> > >>> > > > > >>>>>>>>>   last versions.
>> > >>> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and
>> > >>> > > > > >>>>>>>>>   values*", the Service committer persists them into
>> > >>> > > > > >>>>>>>>>   the permanent store, doesn't it?
>> > >>> > > > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of
>> > >>> > > > > >>>>>>>>>   version mismatch or TX timeout just rollbacks*".
>> > >>> > > > > >>>>>>>>>   But what versions would it match?
>> > >>> > > > > >>>>>>>>>
>> > >>> > > > > >>>>>>>>>
>> > >>> > > > > >>>>>>>>> On Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at
>> > >>> > > > > >>>>>>>>>> the application level.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Let's say we have to call 2 services in the
>> > >>> > > > > >>>>>>>>>> following order:
>> > >>> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1, k2 => v2]
>> > >>> > > > > >>>>>>>>>>   to [k1 => v1a, k2 => v2a]
>> > >>> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
>> > >>> > > > > >>>>>>>>>>   to [k2 => v2ab, k3 => v3b]
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> The change
>> > >>> > > > > >>>>>>>>>>    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
>> > >>> > > > > >>>>>>>>>>    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
>> > >>> > > > > >>>>>>>>>> must happen in a single transaction.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Optimistic protocol to solve this:
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a
>> > >>> > > > > >>>>>>>>>> unique orchestrator TX identifier - it must be a
>> > >>> > > > > >>>>>>>>>> parameter passed to all the services. If `otx` is
>> > >>> > > > > >>>>>>>>>> set to some value, it means the key is an
>> > >>> > > > > >>>>>>>>>> intermediate one and is visible only inside of some
>> > >>> > > > > >>>>>>>>>> transaction; for a finalized key `otx` must be
>> > >>> > > > > >>>>>>>>>> null - it means the key is committed and visible
>> > >>> > > > > >>>>>>>>>> for everyone.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
>> > >>> > > > > >>>>>>>>>> version of that value.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is
>> > >>> > > > > >>>>>>>>>> to use UUID.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> The workflow is the following:
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> The Orchestrator starts the distributed transaction
>> > >>> > > > > >>>>>>>>>> with `otx` = x and passes this parameter to all the
>> > >>> > > > > >>>>>>>>>> services.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Service A:
>> > >>> > > > > >>>>>>>>>> - does some computations
>> > >>> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
>> > >>> > > > > >>>>>>>>>>      where
>> > >>> > > > > >>>>>>>>>>          Za is the time left from the max
>> > >>> > > > > >>>>>>>>>>          Orchestrator TX duration after Service A ends
>> > >>> > > > > >>>>>>>>>>          k1x, k2x are new temporary keys with field `otx` = x
>> > >>> > > > > >>>>>>>>>>          v2a has an updated version `ver`
>> > >>> > > > > >>>>>>>>>> - returns the set of updated keys and all the old
>> > >>> > > > > >>>>>>>>>>   versions to the orchestrator, or just stores it
>> > >>> > > > > >>>>>>>>>>   in some special atomic cache like
>> > >>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] with TTL = Za
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Service B:
>> > >>> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it
>> > >>> > > > > >>>>>>>>>>   knows `otx` = x
>> > >>> > > > > >>>>>>>>>> - does computations
>> > >>> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] with TTL = Zb
>> > >>> > > > > >>>>>>>>>> - updates the set of updated keys like
>> > >>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] with TTL = Zb
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Service Committer (may be embedded into the Orchestrator):
>> > >>> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x:
>> > >>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
>> > >>> > > > > >>>>>>>>>> - in a single transaction checks the value versions
>> > >>> > > > > >>>>>>>>>>   for all the old values and replaces them with the
>> > >>> > > > > >>>>>>>>>>   calculated new ones
>> > >>> > > > > >>>>>>>>>> - does a cleanup of the temporary keys and values
>> > >>> > > > > >>>>>>>>>> - in case of a version mismatch or TX timeout just
>> > >>> > > > > >>>>>>>>>>   rollbacks and signals the Orchestrator to restart
>> > >>> > > > > >>>>>>>>>>   the job with a new `otx`
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> PROFIT!!
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> This approach even allows you to run independent
>> > >>> > > > > >>>>>>>>>> parts of the graph in parallel (with TX transfer
>> > >>> > > > > >>>>>>>>>> you will always run only one at a time). Also it
>> > >>> > > > > >>>>>>>>>> does not require inventing any special fault
>> > >>> > > > > >>>>>>>>>> tolerance techniques, because Ignite caches are
>> > >>> > > > > >>>>>>>>>> already fault tolerant and all the intermediate
>> > >>> > > > > >>>>>>>>>> results are virtually invisible and stored with
>> > >>> > > > > >>>>>>>>>> TTL, so in case of any crash you will not have an
>> > >>> > > > > >>>>>>>>>> inconsistent state or garbage.
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> Sergi
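
For concreteness, the `otx` data model and Service A's TTL-bounded store step
might look like the sketch below. The OtxKey class is an assumption drawn
from this message, VersionedValue is the wrapper from the earlier sketch, and
withExpiryPolicy() is Ignite's standard per-operation TTL mechanism:

    import java.io.Serializable;
    import java.util.UUID;
    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.IgniteCache;

    /** Key with the orchestrator TX id; otx == null means finalized and visible. */
    class OtxKey implements Serializable {
        final String key;
        final UUID otx;
        OtxKey(String key, UUID otx) { this.key = key; this.otx = otx; }
        // equals()/hashCode() over both fields are required; omitted for brevity.
    }

    class ServiceASketch {
        void store(IgniteCache<OtxKey, VersionedValue> cache, UUID otx, long leftMs) {
            // Za: the time left from the max orchestrator TX duration.
            IgniteCache<OtxKey, VersionedValue> bounded = cache.withExpiryPolicy(
                new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, leftMs)));

            // Temporary keys, visible only to services that know this otx.
            bounded.put(new OtxKey("k1", otx), new VersionedValue("v1a", UUID.randomUUID()));
            bounded.put(new OtxKey("k2", otx), new VersionedValue("v2a", UUID.randomUUID()));
        }
    }
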
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
>> > >>> > > > > >>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>> Okay, we are open to proposals on the business
>> > >>> > > > > >>>>>>>>>>> task. I mean, we can make use of some other thing,
>> > >>> > > > > >>>>>>>>>>> not a distributed transaction - maybe not a
>> > >>> > > > > >>>>>>>>>>> transaction at all.
>> > >>> > > > > >>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>> On Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
>> > >>> > > > > >>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi
>> > >>> already
>> > >>> > > > > >>>>>>> mentioned,
>> > >>> > > > > >>>>>>>>> the
>> > >>> > > > > >>>>>>>>>>>> problem is far more complex, than simply
>> passing
>> > TX
>> > >>> > > > > >> state
>> > >>> > > > > >>>>> over
>> > >>> > > > > >>>>>> a
>> > >>> > > > > >>>>>>>>> wire.
>> > >>> > > > > >>>>>>>>>>> Most
>> > >>> > > > > >>>>>>>>>>>> probably a kind of coordinator will be required
>> > >>> still
>> > >>> > > > > >> to
>> > >>> > > > > >>>>> manage
>> > >>> > > > > >>>>>>> all
>> > >>> > > > > >>>>>>>>>> kinds
>> > >>> > > > > >>>>>>>>>>>> of failures. This task should be started with
>> > clean
>> > >>> > > > > >>> design
>> > >>> > > > > >>>>>>> proposal
>> > >>> > > > > >>>>>>>>>>>> explaining how we handle all these concurrent
>> > >>> events.
>> > >>> > > > > >> And
>> > >>> > > > > >>>>> only
>> > >>> > > > > >>>>>>>> then,
>> > >>> > > > > >>>>>>>>>> when
>> > >>> > > > > >>>>>>>>>>>> we understand all implications, we should move
>> to
>> > >>> > > > > >>>> development
>> > >>> > > > > >>>>>>>> stage.
>> > >>> > > > > >>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY
>> > KUZNETSOV
>> > >>> <
>> > >>> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
>> > >>> > > > > >>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>> Right
>> > >>> > > > > >>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes
>> > some
>> > >>> > > > > >>>>>> predefined
>> > >>> > > > > >>>>>>>>> graph
>> > >>> > > > > >>>>>>>>>> of
>> > >>> > > > > >>>>>>>>>>>>>> distributed services to be invoked, calls
>> them
>> > by
>> > >>> > > > > >>> some
>> > >>> > > > > >>>>> kind
>> > >>> > > > > >>>>>>> of
>> > >>> > > > > >>>>>>>>> RPC
>> > >>> > > > > >>>>>>>>>>> and
>> > >>> > > > > >>>>>>>>>>>>>> passes the needed parameters between them,
>> > right?
>> > >>> > > > > >>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>> Sergi
>> > >>> > > > > >>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV
>> <
>> > >>> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is
>> > >>> > > > > >>>>>>>>>>>>>>> responsible for managing business scenario
>> > >>> > > > > >>>>>>>>>>>>>>> flows. Many nodes are involved in the
>> > >>> > > > > >>>>>>>>>>>>>>> scenarios; they exchange data and follow one
>> > >>> > > > > >>>>>>>>>>>>>>> another. If you are acquainted with the BPMN
>> > >>> > > > > >>>>>>>>>>>>>>> framework, the orchestrator is like a BPMN
>> > >>> > > > > >>>>>>>>>>>>>>> engine.
>> > >>> > > > > >>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>> On Tue, Mar 14, 2017 at 18:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
>> > >>> > > > > >> from
>> > >>> > > > > >>>>>>> Microsoft
>> > >>> > > > > >>>>>>>> or
>> > >>> > > > > >>>>>>>>>>> your
>> > >>> > > > > >>>>>>>>>>>>>> custom
>> > >>> > > > > >>>>>>>>>>>>>>>> in-house software?
>> > >>> > > > > >>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>> Sergi
>> > >>> > > > > >>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY
>> KUZNETSOV <
>> > >>> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers
>> > >>> > > > > >>>>>>>>>>>>>>>>> which fulfill custom logic. These servers
>> > >>> > > > > >>>>>>>>>>>>>>>>> compose an oriented graph (a BPMN process)
>> > >>> > > > > >>>>>>>>>>>>>>>>> which is controlled by the Orchestrator.
>> > >>> > > > > >>>>>>>>>>>>>>>>> For instance, *server1* creates *variable A*
>> > >>> > > > > >>>>>>>>>>>>>>>>> with value 1, persists it to the IGNITE
>> > >>> > > > > >>>>>>>>>>>>>>>>> cache, then creates *variable B* and sends
>> > >>> > > > > >>>>>>>>>>>>>>>>> it to *server2*. The latter receives
>> > >>> > > > > >>>>>>>>>>>>>>>>> *variable B*, does some logic with it and
>> > >>> > > > > >>>>>>>>>>>>>>>>> stores the result to IGNITE.
>> > >>> > > > > >>>>>>>>>>>>>>>>> All the work made by both servers must be
>> > >>> > > > > >>>>>>>>>>>>>>>>> fulfilled in *one* transaction, because we
>> > >>> > > > > >>>>>>>>>>>>>>>>> need all the information done, or nothing
>> > >>> > > > > >>>>>>>>>>>>>>>>> (rolled back). The scenario is managed by
>> > >>> > > > > >>>>>>>>>>>>>>>>> the orchestrator.
>> > >>> > > > > >>>>>>>>>>>>>>>>>
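
In code, the capability being requested would look roughly like the sketch
below. DistributedTx and its propagation between servers are entirely
hypothetical - this is precisely what did not exist at the time:

    import org.apache.ignite.IgniteCache;

    /** Hypothetical handle for a transaction shared across JVMs. */
    interface DistributedTx {
        void commit();
        void rollback();
    }

    class Server1 {
        int step(DistributedTx tx, IgniteCache<String, Integer> cache) {
            cache.put("variableA", 1); // persisted under the shared tx
            return 2;                  // "variable B", passed on to server2
        }
    }

    class Server2 {
        void step(DistributedTx tx, IgniteCache<String, Integer> cache, int varB) {
            cache.put("variableB", varB * 10); // some logic, stored under the same tx
            tx.commit();                       // work of both servers commits atomically
        }
    }
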
>> > >>> > > > > >>>>>>>>>>>>>>>>> On Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your
>> > >>> > > > > >>>>>>>>>>>>>>>>>> wrong solution for it.
>> > >>> > > > > >>>>>>>>>>>>>>>>>> Let's try again: what is the business case?
>> > >>> > > > > >>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>> Sergi
>> > >>> > > > > >>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
>> > >>> > > > > >> KUZNETSOV
>> > >>> > > > > >>> <
>> > >>> > > > > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>>>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts a
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> transaction on one node, and commits this
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> transaction on another JVM node (or rolls
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> it back remotely).
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> On Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> some simplistic scenario, get ready to
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> write many fault tolerance tests and make
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> sure that your TXs work gracefully in all
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> modes in case of crashes. Also make sure
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> that we do not have any performance drops
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> after all your changes in the existing
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> benchmarks. All in all, I don't believe
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> these conditions will be met and your
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> contribution will be accepted.
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> A better solution to what problem?
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Sending a TX to another node? The problem
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> statement itself is already wrong. What
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> business case are you trying to solve?
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> I'm sure everything you need can be done
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> in a much more simple and efficient way
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> at the application level.
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Sergi
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY
>> > >>> > > > > >>>> KUZNETSOV
>> > >>> > > > > >>>>> <
>> > >>> > > > > >>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> On Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> Just serializing the TX object and
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> deserializing it on another node is
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> meaningless, because the other nodes
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> participating in the TX have to know
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> about the new coordinator. This will
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> require protocol changes, and we will
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> definitely have fault tolerance and
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> performance issues. IMO the whole idea
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> is wrong and it makes no sense to waste
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> time on it.
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> Sergi
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
>> > >>> > > > > >>>>>> KUZNETSOV
>> > >>> > > > > >>>>>>> <
>> > >>> > > > > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> :
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> implementation contains IgniteTxEntry
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> instances, which are supposed to be
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> transferable.
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that
>> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> we are passing transaction objects
>> > >>>
>> > >>> --
>> >
>> > *Best Regards,*
>> >
>> > *Kuznetsov Aleksey*
>> >
>>
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Thanks! Do you think all the test scenario results, presented in the table
(in the ticket comments), are acceptable?

пт, 30 июн. 2017 г., 18:28 Yakov Zhdanov <yz...@gridgain.com>:

> Alex, I have commented in the ticket. Please take a look.
>
> Thanks!
> --
> Yakov Zhdanov, Director R&D
> *GridGain Systems*
> www.gridgain.com
>
> 2017-06-29 17:27 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > I've attached HangTest. I suppose it should not hang, am i right ?
> >
> > чт, 29 июн. 2017 г. в 14:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> >
> > > Igntrs.
> > > Im rewieving all usages of threadId of
> > > transaction.(IgniteTxAdapter#threadID). What is the point of usage
> > threadId
> > > in mvcc entry ?
> > >
> > > пн, 3 апр. 2017 г. в 9:47, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > >> so what do u think on my idea?
> > >>
> > >> пт, 31 Мар 2017 г., 11:05 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >>
> > >>> sorry for misleading you. We planned to support multi-node
> > transactions,
> > >>> but failed.
> > >>>
> > >>> пт, 31 мар. 2017 г. в 10:51, Alexey Goncharuk <
> > >>> alexey.goncharuk@gmail.com>:
> > >>>
> > >>> Well, now the scenario is more clear, but it has nothing to do with
> > >>> multiple coordinators :) Let me think a little bit about it.
> > >>>
> > >>> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > >>>
> > >>> > so what do u think on the issue ?
> > >>> >
> > >>> > чт, 30 Мар 2017 г., 17:49 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > >>> >:
> > >>> >
> > >>> > > Hi ! Thanks for help. I've created ticket :
> > >>> > > https://issues.apache.org/jira/browse/IGNITE-4887
> > >>> > > and a commit :
> > >>> > >
> > >>>
> https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
> > >>> > 436b638e5c
> > >>> > > We really need this feature
> > >>> > >
> > >>> > > чт, 30 мар. 2017 г. в 11:31, Alexey Goncharuk <
> > >>> > alexey.goncharuk@gmail.com
> > >>> > > >:
> > >>> > >
> > >>> > > Aleksey,
> > >>> > >
> > >>> > > I doubt your approach works as expected. Current transaction
> > recovery
> > >>> > > protocol heavily relies on the originating node ID in its
> internal
> > >>> logic.
> > >>> > > For example, currently a transaction will be rolled back if you
> > want
> > >>> to
> > >>> > > transfer a transaction ownership to another node and original tx
> > >>> owner
> > >>> > > fails. An attempt to commit such a transaction on another node
> may
> > >>> fail
> > >>> > > with all sorts of assertions. After transaction ownership
> changed,
> > >>> you
> > >>> > need
> > >>> > > to notify all current transaction participants about this change,
> > >>> and it
> > >>> > > should also be done failover-safe, let alone that you did not add
> > any
> > >>> > tests
> > >>> > > for these cases.
> > >>> > >
> > >>> > > I back Denis here. Please create a ticket first and come up with
> > >>> clear
> > >>> > > use-cases, API and protocol changes design. It is hard to reason
> > >>> about
> > >>> > the
> > >>> > > changes you've made when we do not even understand why you are
> > making
> > >>> > these
> > >>> > > changes and how they are supposed to work.
> > >>> > >
> > >>> > > --AG
> > >>> > >
> > >>> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> alkuznetsov.sb@gmail.com>:
> > >>> > >
> > >>> > > > So, what do u think on my idea ?
> > >>> > > >
> > >>> > > > ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <
> > >>> > alkuznetsov.sb@gmail.com
> > >>> > > >:
> > >>> > > >
> > >>> > > > > Hi! No, i dont have ticket for this.
> > >>> > > > > In the ticket i have implemented methods that change
> > transaction
> > >>> > status
> > >>> > > > to
> > >>> > > > > STOP, thus letting it to commit transaction in another
> thread.
> > In
> > >>> > > another
> > >>> > > > > thread you r going to restart transaction in order to commit
> > it.
> > >>> > > > > The mechanism behind it is obvious : we change thread id to
> > >>> newer one
> > >>> > > in
> > >>> > > > > ThreadMap, and make use of serialization of txState,
> > transactions
> > >>> > > itself
> > >>> > > > to
> > >>> > > > > transfer them into another thread.
> > >>> > > > >
> > >>> > > > >
> > >>> > > > > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dmagda@apache.org
> >:
> > >>> > > > >
> > >>> > > > > Aleksey,
> > >>> > > > >
> > >>> > > > > Do you have a ticket for this? Could you briefly list what
> > >>> exactly
> > >>> > was
> > >>> > > > > done and how the things work.
> > >>> > > > >
> > >>> > > > > —
> > >>> > > > > Denis
> > >>> > > > >
> > >>> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> > >>> > > > alkuznetsov.sb@gmail.com>
> > >>> > > > > wrote:
> > >>> > > > > >
> > >>> > > > > > Hi, Igniters! I 've made implementation of transactions of
> > >>> > non-single
> > >>> > > > > > coordinator. Here you can start transaction in one thread
> and
> > >>> > commit
> > >>> > > it
> > >>> > > > > in
> > >>> > > > > > another thread.
> > >>> > > > > > Take a look on it. Give your thoughts on it.
> > >>> > > > > >
> > >>> > > > > >
> > >>> > > > > https://github.com/voipp/ignite/pull/10/commits/
> > >>> > > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > >>> > > > > >
> > >>> > > > > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <
> > >>> > > sergi.vladykin@gmail.com
> > >>> > > > >:
> > >>> > > > > >
> > >>> > > > > >> You know better, go ahead! :)
> > >>> > > > > >>
> > >>> > > > > >> Sergi
> > >>> > > > > >>
> > >>> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > alkuznetsov.sb@gmail.com
> > >>> > > > > >:
> > >>> > > > > >>
> > >>> > > > > >>> we've discovered several problems regarding your
> > >>> "accumulation"
> > >>> > > > > >>> approach.These are
> > >>> > > > > >>>
> > >>> > > > > >>>   1. perfomance issues when transfering data from
> temporary
> > >>> cache
> > >>> > > to
> > >>> > > > > >>>   permanent one. Keep in mind big deal of concurent
> > >>> transactions
> > >>> > in
> > >>> > > > > >>> Service
> > >>> > > > > >>>   commiter
> > >>> > > > > >>>   2. extreme memory load when keeping temporary cache in
> > >>> memory
> > >>> > > > > >>>   3. As long as user is not acquainted with ignite,
> working
> > >>> with
> > >>> > > > cache
> > >>> > > > > >>>   must be transparent for him. Keep this in mind. User's
> > >>> node can
> > >>> > > > > >> evaluate
> > >>> > > > > >>>   logic with no transaction at all, so we should deal
> with
> > >>> both
> > >>> > > types
> > >>> > > > > of
> > >>> > > > > >>>   execution flow : transactional and
> > >>> non-transactional.Another
> > >>> > one
> > >>> > > > > >>> problem is
> > >>> > > > > >>>   transaction id support at the user node. We would have
> > >>> handled
> > >>> > > all
> > >>> > > > > >> this
> > >>> > > > > >>>   issues and many more.
> > >>> > > > > >>>   4. we cannot pessimistically lock entity.
> > >>> > > > > >>>
> > >>> > > > > >>> As a result, we decided to move on building distributed
> > >>> > > transaction.
> > >>> > > > We
> > >>> > > > > >> put
> > >>> > > > > >>> aside your "accumulation" approach until we realize how
> to
> > >>> solve
> > >>> > > > > >>> difficulties above .
> > >>> > > > > >>>
> > >>> > > > > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> > >>> > > > sergi.vladykin@gmail.com
> > >>> > > > > >:
> > >>> > > > > >>>
> > >>> > > > > >>>> The problem "How to run millions of entities, and
> millions
> > >>> of
> > >>> > > > > >> operations
> > >>> > > > > >>> on
> > >>> > > > > >>>> a single Pentium3" is out of scope here. Do the math,
> plan
> > >>> > > capacity
> > >>> > > > > >>>> reasonably.
> > >>> > > > > >>>>
> > >>> > > > > >>>> Sergi
> > >>> > > > > >>>>
> > >>> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > > alkuznetsov.sb@gmail.com
> > >>> > > > > >>> :
> > >>> > > > > >>>>
> > >>> > > > > >>>>> hmm, If we have millions of entities, and millions of
> > >>> > operations,
> > >>> > > > > >> would
> > >>> > > > > >>>> not
> > >>> > > > > >>>>> this approache lead to memory overflow and perfomance
> > >>> > degradation
> > >>> > > > > >>>>>
> > >>> > > > > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > >>> > > > > >> sergi.vladykin@gmail.com
> > >>> > > > > >>>> :
> > >>> > > > > >>>>>
> > >>> > > > > >>>>>> 1. Actually you have to check versions on all the
> values
> > >>> you
> > >>> > > have
> > >>> > > > > >>> read
> > >>> > > > > >>>>>> during the tx.
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> put(k1, get(k2) + 5)
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> We have to remember the version for k2. This logic can
> > be
> > >>> > > > > >> relatively
> > >>> > > > > >>>>> easily
> > >>> > > > > >>>>>> encapsulated in a framework atop of Ignite. You need
> to
> > >>> > > implement
> > >>> > > > > >> one
> > >>> > > > > >>>> to
> > >>> > > > > >>>>>> make all this stuff usable.
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> 2. I suggest to avoid any locking here, because you
> > easily
> > >>> > will
> > >>> > > > end
> > >>> > > > > >>> up
> > >>> > > > > >>>>> with
> > >>> > > > > >>>>>> deadlocks. If you do not have too frequent updates for
> > >>> your
> > >>> > > keys,
> > >>> > > > > >>>>>> optimistic approach will work just fine.
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> Theoretically in the Committer Service you can start a
> > >>> thread
> > >>> > > for
> > >>> > > > > >> the
> > >>> > > > > >>>>>> lifetime of the whole distributed transaction, take a
> > >>> lock on
> > >>> > > the
> > >>> > > > > >> key
> > >>> > > > > >>>>> using
> > >>> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services,
> > >>> wait
> > >>> > for
> > >>> > > > all
> > >>> > > > > >>> the
> > >>> > > > > >>>>>> services to complete, execute optimistic commit in the
> > >>> same
> > >>> > > thread
> > >>> > > > > >>>> while
> > >>> > > > > >>>>>> keeping this lock and then release it. Notice that all
> > the
> > >>> > > Ignite
> > >>> > > > > >>>>>> transactions inside of all Services must be optimistic
> > >>> here to
> > >>> > > be
> > >>> > > > > >>> able
> > >>> > > > > >>>> to
> > >>> > > > > >>>>>> read this locked key.
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> But again I do not recommend you using this approach
> > >>> until you
> > >>> > > > > >> have a
> > >>> > > > > >>>>>> reliable deadlock avoidance scheme.
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> Sergi
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > > >>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>> :
> > >>> > > > > >>>>>>
> > >>> > > > > >>>>>>> Yeah, now i got it.
> > >>> > > > > >>>>>>> There are some doubts on this approach
> > >>> > > > > >>>>>>> 1) During optimistic commit phase, when you assure no
> > one
> > >>> > > altered
> > >>> > > > > >>> the
> > >>> > > > > >>>>>>> original values, you must check versions of other
> > >>> dependent
> > >>> > > keys.
> > >>> > > > > >>> How
> > >>> > > > > >>>>>> could
> > >>> > > > > >>>>>>> we obtain those keys(in an automative manner, of
> > course)
> > >>> ?
> > >>> > > > > >>>>>>> 2) How could we lock a key before some Service A
> > >>> introduce
> > >>> > > > > >> changes?
> > >>> > > > > >>>> So
> > >>> > > > > >>>>> no
> > >>> > > > > >>>>>>> other service is allowed to change this
> key-value?(sort
> > >>> of
> > >>> > > > > >>>> pessimistic
> > >>> > > > > >>>>>>> blocking)
> > >>> > > > > >>>>>>> May be you know some implementations of such
> approach ?
> > >>> > > > > >>>>>>>
> > >>> > > > > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > >>> > > > > >>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>> :
> > >>> > > > > >>>>>>>
> > >>> > > > > >>>>>>>> Thank you very much for help.  I will answer later.
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > >>> > > > > >>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>> :
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> All the services do not update key in place, but
> only
> > >>> > generate
> > >>> > > > > >>> new
> > >>> > > > > >>>>> keys
> > >>> > > > > >>>>>>>> augmented by otx and store the updated value in the
> > same
> > >>> > cache
> > >>> > > > > >> +
> > >>> > > > > >>>>>> remember
> > >>> > > > > >>>>>>>> the keys and versions participating in the
> transaction
> > >>> in
> > >>> > some
> > >>> > > > > >>>>> separate
> > >>> > > > > >>>>>>>> atomic cache.
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Follow this sequence of changes applied to cache
> > >>> contents by
> > >>> > > > > >> each
> > >>> > > > > >>>>>>> Service:
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Initial cache contents:
> > >>> > > > > >>>>>>>>            [k1 => v1]
> > >>> > > > > >>>>>>>>            [k2 => v2]
> > >>> > > > > >>>>>>>>            [k3 => v3]
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Cache contents after Service A:
> > >>> > > > > >>>>>>>>            [k1 => v1]
> > >>> > > > > >>>>>>>>            [k2 => v2]
> > >>> > > > > >>>>>>>>            [k3 => v3]
> > >>> > > > > >>>>>>>>            [k1x => v1a]
> > >>> > > > > >>>>>>>>            [k2x => v2a]
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some
> > >>> separate
> > >>> > > > > >>> atomic
> > >>> > > > > >>>>>> cache
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Cache contents after Service B:
> > >>> > > > > >>>>>>>>            [k1 => v1]
> > >>> > > > > >>>>>>>>            [k2 => v2]
> > >>> > > > > >>>>>>>>            [k3 => v3]
> > >>> > > > > >>>>>>>>            [k1x => v1a]
> > >>> > > > > >>>>>>>>            [k2x => v2ab]
> > >>> > > > > >>>>>>>>            [k3x => v3b]
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > in
> > >>> some
> > >>> > > > > >>>>> separate
> > >>> > > > > >>>>>>>> atomic cache
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Finally the Committer Service takes this map of
> > updated
> > >>> keys
> > >>> > > > > >> and
> > >>> > > > > >>>>> their
> > >>> > > > > >>>>>>>> versions from some separate atomic cache, starts
> > Ignite
> > >>> > > > > >>> transaction
> > >>> > > > > >>>>> and
> > >>> > > > > >>>>>>>> replaces all the values for k* keys to values taken
> > >>> from k*x
> > >>> > > > > >>> keys.
> > >>> > > > > >>>>> The
> > >>> > > > > >>>>>>>> successful result must be the following:
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>            [k1 => v1a]
> > >>> > > > > >>>>>>>>            [k2 => v2ab]
> > >>> > > > > >>>>>>>>            [k3 => v3b]
> > >>> > > > > >>>>>>>>            [k1x => v1a]
> > >>> > > > > >>>>>>>>            [k2x => v2ab]
> > >>> > > > > >>>>>>>>            [k3x => v3b]
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > in
> > >>> some
> > >>> > > > > >>>>> separate
> > >>> > > > > >>>>>>>> atomic cache
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> But Committer Service also has to check that no one
> > >>> updated
> > >>> > > the
> > >>> > > > > >>>>>> original
> > >>> > > > > >>>>>>>> values before us, because otherwise we can not give
> > any
> > >>> > > > > >>>>> serializability
> > >>> > > > > >>>>>>>> guarantee for these distributed transactions. Here
> we
> > >>> may
> > >>> > need
> > >>> > > > > >> to
> > >>> > > > > >>>>> check
> > >>> > > > > >>>>>>> not
> > >>> > > > > >>>>>>>> only versions of the updated keys, but also versions
> > of
> > >>> any
> > >>> > > > > >> other
> > >>> > > > > >>>>> keys
> > >>> > > > > >>>>>>> end
> > >>> > > > > >>>>>>>> result depends on.
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> After that Committer Service has to do a cleanup
> (may
> > be
> > >>> > > > > >> outside
> > >>> > > > > >>> of
> > >>> > > > > >>>>> the
> > >>> > > > > >>>>>>>> committing tx) to come to the following final state:
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>            [k1 => v1a]
> > >>> > > > > >>>>>>>>            [k2 => v2ab]
> > >>> > > > > >>>>>>>>            [k3 => v3b]
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Makes sense?
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> Sergi
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > > >>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>> :
> > >>> > > > > >>>>>>>>
> > >>> > > > > >>>>>>>>>   - What do you mean by saying "*in a single transaction
> > >>> > > > > >>>>>>>>>   checks value versions for all the old values and replaces
> > >>> > > > > >>>>>>>>>   them with calculated new ones*"? Every time you change a
> > >>> > > > > >>>>>>>>>   value (in some service), you store it to *some special
> > >>> > > > > >>>>>>>>>   atomic cache*, so when all services have ceased working,
> > >>> > > > > >>>>>>>>>   the Service committer gets the values with the latest
> > >>> > > > > >>>>>>>>>   versions.
> > >>> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
> > >>> > > > > >>>>>>>>>   Service committer persists them into the permanent store,
> > >>> > > > > >>>>>>>>>   doesn't it?
> > >>> > > > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> > >>> > > > > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
> > >>> > > > > >>>>>>>>>   would it match?
> > >>> > > > > >>>>>>>>>
> > >>> > > > > >>>>>>>>>
> > >>> > > > > >>>>>>>>> Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <
> > >>> > > > > >>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>> :
> > >>> > > > > >>>>>>>>>
> > >>> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > >>> > > > > >>>>>>>>>> application level.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Let's say we have to call 2 services in the following order:
> > >>> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 => v2]
> > >>> > > > > >>>>>>>>>>   to [k1 => v1a,  k2 => v2a]
> > >>> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> > >>> > > > > >>>>>>>>>>   to [k2 => v2ab, k3 => v3b]
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> The change
> > >>> > > > > >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > >>> > > > > >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > >>> > > > > >>>>>>>>>> must happen in a single transaction.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Optimistic protocol to solve this:
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a unique
> > >>> > > > > >>>>>>>>>> orchestrator TX identifier - it must be a parameter passed
> > >>> > > > > >>>>>>>>>> to all the services. If `otx` is set to some value, it
> > >>> > > > > >>>>>>>>>> means that it is an intermediate key and is visible only
> > >>> > > > > >>>>>>>>>> inside of some transaction; for a finalized key `otx` must
> > >>> > > > > >>>>>>>>>> be null - it means the key is committed and visible for
> > >>> > > > > >>>>>>>>>> everyone.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
> > >>> > > > > >>>>>>>>>> version of that value.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use
> > >>> > > > > >>>>>>>>>> UUID.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Workflow is the following:
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Orchestrator starts the distributed transaction with
> > >>> > > > > >>>>>>>>>> `otx` = x and passes this parameter to all the services.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Service A:
> > >>> > > > > >>>>>>>>>> - does some computations
> > >>> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > >>> > > > > >>>>>>>>>>      where
> > >>> > > > > >>>>>>>>>>          Za - the time left from the max Orchestrator TX
> > >>> > > > > >>>>>>>>>>          duration after Service A ends
> > >>> > > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` = x
> > >>> > > > > >>>>>>>>>>          v2a has an updated version `ver`
> > >>> > > > > >>>>>>>>>> - returns the set of updated keys and all the old versions
> > >>> > > > > >>>>>>>>>>   to the orchestrator, or just stores it in some special
> > >>> > > > > >>>>>>>>>>   atomic cache like [x => (k1 -> ver1, k2 -> ver2)] with
> > >>> > > > > >>>>>>>>>>   TTL = Za
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Service B:
> > >>> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows
> > >>> > > > > >>>>>>>>>>   `otx` = x
> > >>> > > > > >>>>>>>>>> - does computations
> > >>> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] with TTL = Zb
> > >>> > > > > >>>>>>>>>> - updates the set of updated keys like
> > >>> > > > > >>>>>>>>>>   [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] with TTL = Zb
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > >>> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x:
> > >>> > > > > >>>>>>>>>>   [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > >>> > > > > >>>>>>>>>> - in a single transaction checks the value versions for
> > >>> > > > > >>>>>>>>>>   all the old values and replaces them with the calculated
> > >>> > > > > >>>>>>>>>>   new ones
> > >>> > > > > >>>>>>>>>> - does a cleanup of the temporary keys and values
> > >>> > > > > >>>>>>>>>> - in case of a version mismatch or a TX timeout just rolls
> > >>> > > > > >>>>>>>>>>   back and signals to the Orchestrator to restart the job
> > >>> > > > > >>>>>>>>>>   with a new `otx`
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> PROFIT!!
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> This approach even allows you to run independent parts of
> > >>> > > > > >>>>>>>>>> the graph in parallel (with TX transfer you will always run
> > >>> > > > > >>>>>>>>>> only one at a time). Also it does not require inventing any
> > >>> > > > > >>>>>>>>>> special fault tolerance techniques, because Ignite caches
> > >>> > > > > >>>>>>>>>> are already fault tolerant and all the intermediate results
> > >>> > > > > >>>>>>>>>> are virtually invisible and stored with TTL, thus in case
> > >>> > > > > >>>>>>>>>> of any crash you will not have inconsistent state or
> > >>> > > > > >>>>>>>>>> garbage.
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>
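To make the `otx` and `ver` fields concrete, this is one possible shape for
the key and value classes: plain serializable POJOs with UUID fields, as
suggested above. The names are illustrative only and not part of any Ignite
API.

    import java.io.Serializable;
    import java.util.Objects;
    import java.util.UUID;

    /** Cache key: otx == null means committed and visible to everyone. */
    class OKey implements Serializable {
        final String key; // business key, e.g. "k1"
        final UUID otx;   // non-null => intermediate, visible only inside tx `otx`

        OKey(String key, UUID otx) {
            this.key = key;
            this.otx = otx;
        }

        @Override public boolean equals(Object o) {
            return o instanceof OKey
                && key.equals(((OKey)o).key)
                && Objects.equals(otx, ((OKey)o).otx);
        }

        @Override public int hashCode() {
            return 31 * key.hashCode() + Objects.hashCode(otx);
        }
    }

    /** Cache value: `ver` is regenerated on every update. */
    class OValue implements Serializable {
        final Object payload; // the business value, e.g. "v2a"
        final UUID ver = UUID.randomUUID();

        OValue(Object payload) {
            this.payload = payload;
        }
    }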
> > >>> > > > > >>>>>>>>>>
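And a sketch of the Service A step from the workflow, reusing the
illustrative OKey/OValue classes from the previous sketch: temporary entries
are written with TTL = Za through a standard JCache expiry policy, and the
version map for the committer is stored the same way. The cache names are
assumptions again.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;

    public class ServiceA {
        static void run(Ignite ignite, UUID otx, long zaMillis /* Za: time left of the otx */) {
            IgniteCache<OKey, OValue> data = ignite.cache("data");

            // Temporary entries expire on their own if the orchestrator dies.
            CreatedExpiryPolicy ttl =
                new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, zaMillis));

            IgniteCache<OKey, OValue> tmp = data.withExpiryPolicy(ttl);

            OValue v1 = data.get(new OKey("k1", null));
            OValue v2 = data.get(new OKey("k2", null));

            // ... compute v1a and v2a from v1 and v2 ...
            tmp.put(new OKey("k1", otx), new OValue("v1a"));
            tmp.put(new OKey("k2", otx), new OValue("v2a"));

            // Remember the versions the computation was based on:
            // [x => (k1 -> ver1, k2 -> ver2)], also with TTL = Za.
            IgniteCache<UUID, Map<String, UUID>> meta =
                ignite.<UUID, Map<String, UUID>>cache("otxMeta").withExpiryPolicy(ttl);

            Map<String, UUID> vers = new HashMap<>();
            vers.put("k1", v1.ver);
            vers.put("k2", v2.ver);
            meta.put(otx, vers);
        }
    }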
> > >>> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > > >>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>> :
> > >>> > > > > >>>>>>>>>>
> > >>> > > > > >>>>>>>>>>> Okay, we are open for proposals on the business task. I
> > >>> > > > > >>>>>>>>>>> mean, we can make use of some other thing, not necessarily
> > >>> > > > > >>>>>>>>>>> a distributed transaction.
> > >>> > > > > >>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>> Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <
> > >>> > > > > >>>>>> vozerov@gridgain.com
> > >>> > > > > >>>>>>>> :
> > >>> > > > > >>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi already
> > >>> > > > > >>>>>>>>>>>> mentioned, the problem is far more complex than simply
> > >>> > > > > >>>>>>>>>>>> passing TX state over a wire. Most probably a kind of
> > >>> > > > > >>>>>>>>>>>> coordinator will still be required to manage all kinds of
> > >>> > > > > >>>>>>>>>>>> failures. This task should be started with a clean design
> > >>> > > > > >>>>>>>>>>>> proposal explaining how we handle all these concurrent
> > >>> > > > > >>>>>>>>>>>> events. And only then, when we understand all the
> > >>> > > > > >>>>>>>>>>>> implications, should we move to the development stage.
> > >>> > > > > >>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY
> > KUZNETSOV
> > >>> <
> > >>> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
> > >>> > > > > >>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>> Right
> > >>> > > > > >>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>> Wed, 15 Mar 2017 at 10:35, Sergi Vladykin <
> > >>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
> > >>> > > > > >>>>>>>>>>>>>> predefined graph of distributed services to be invoked,
> > >>> > > > > >>>>>>>>>>>>>> calls them by some kind of RPC and passes the needed
> > >>> > > > > >>>>>>>>>>>>>> parameters between them, right?
> > >>> > > > > >>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible
> > >>> > > > > >>>>>>>>>>>>>>> for managing business scenario flows. Many nodes are
> > >>> > > > > >>>>>>>>>>>>>>> involved in the scenarios. They exchange data and
> > >>> > > > > >>>>>>>>>>>>>>> follow one another. If you are acquainted with the
> > >>> > > > > >>>>>>>>>>>>>>> BPMN framework, the orchestrator is like a BPMN engine.
> > >>> > > > > >>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>> Tue, 14 Mar 2017, 18:56 Sergi Vladykin <
> > >>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing from
> > >>> > > > > >>>>>>>>>>>>>>>> Microsoft or your custom in-house software?
> > >>> > > > > >>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY
> KUZNETSOV <
> > >>> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers which
> > >>> > > > > >>>>>>>>>>>>>>>>> fulfill custom logic. These servers compose an
> > >>> > > > > >>>>>>>>>>>>>>>>> oriented graph (a BPMN process) which is controlled
> > >>> > > > > >>>>>>>>>>>>>>>>> by the Orchestrator.
> > >>> > > > > >>>>>>>>>>>>>>>>> For instance, *server1* creates *variable A* with
> > >>> > > > > >>>>>>>>>>>>>>>>> value 1, persists it to the IGNITE cache, and
> > >>> > > > > >>>>>>>>>>>>>>>>> creates *variable B* and sends it to *server2*. The
> > >>> > > > > >>>>>>>>>>>>>>>>> latter receives *variable B*, does some logic with
> > >>> > > > > >>>>>>>>>>>>>>>>> it and stores it to IGNITE.
> > >>> > > > > >>>>>>>>>>>>>>>>> All the work made by both servers must be fulfilled
> > >>> > > > > >>>>>>>>>>>>>>>>> in *one* transaction, because we need all the
> > >>> > > > > >>>>>>>>>>>>>>>>> information done, or nothing (rolled back). The
> > >>> > > > > >>>>>>>>>>>>>>>>> scenario is managed by the orchestrator.
> > >>> > > > > >>>>>>>>>>>>>>>>>
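The flow just described - begin on server1, continue and commit on server2 -
is essentially a transaction hand-over. Newer Ignite versions expose
suspend() and resume() on Transaction for exactly this kind of hand-over
between threads of one JVM; carrying the transaction across JVMs, as wanted
here, is the open part. A sketch of server1's side under that assumption
(the "vars" cache name is made up):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;

    public class Server1 {
        static Transaction step(Ignite ignite) {
            IgniteCache<String, Integer> vars = ignite.cache("vars");

            Transaction tx = ignite.transactions().txStart(OPTIMISTIC, READ_COMMITTED);

            vars.put("A", 1); // variable A is persisted inside the tx

            tx.suspend();     // hand the tx over instead of committing here

            // server2 (another thread today; another JVM in this proposal) would:
            //   tx.resume();
            //   vars.put("B", ...);  // server2's share of the work
            //   tx.commit();         // or rollback() - all or nothing

            return tx;
        }
    }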
> > >>> > > > > >>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 17:31, Sergi Vladykin <
> Vladykin <
> > >>> > > > > >>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your wrong
> > >>> > > > > >>>>>>>>>>>>>>>>>> solution for it.
> > >>> > > > > >>>>>>>>>>>>>>>>>> Let's try again: what is the business case?
> > >>> > > > > >>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
> > >>> > > > > >> KUZNETSOV
> > >>> > > > > >>> <
> > >>> > > > > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts a
> > >>> > > > > >>>>>>>>>>>>>>>>>>> transaction in one node and commits this
> > >>> > > > > >>>>>>>>>>>>>>>>>>> transaction in another JVM node (or rolls it back
> > >>> > > > > >>>>>>>>>>>>>>>>>>> remotely).
> > >>> > > > > >>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 16:30, Sergi
> > >>> > > > > >>> Vladykin <
> > >>> > > > > >>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for some
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> simplistic scenario, get ready to write many
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> fault tolerance tests and make sure that your
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> TXs work gracefully in all modes in case of
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> crashes. Also make sure that we do not have any
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> performance drops after all your changes in the
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> existing benchmarks. All in all, I don't believe
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> these conditions will be met and your
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> contribution will be accepted.
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Better solution to what problem? Sending a TX to
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> another node? The problem statement itself is
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> already wrong. What business case are you trying
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> to solve? I'm sure everything you need can be
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> done in a much simpler and more efficient way at
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> the application level.
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY
> > >>> > > > > >>>> KUZNETSOV
> > >>> > > > > >>>>> <
> > >>> > > > > >>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 15:46, Sergi
> > >>> > > > > >>>>> Vladykin <
> > >>> > > > > >>>>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> Just serializing a TX object and deserializing
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> it on another node is meaningless, because the
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> other nodes participating in the TX have to
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> know about the new coordinator. This will
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> require protocol changes, and we will
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> definitely have fault tolerance and performance
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> issues. IMO the whole idea is wrong and it
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> makes no sense to waste time on it.
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> Sergi
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > >>> > > > > >>>>>> KUZNETSOV
> > >>> > > > > >>>>>>> <
> > >>> > > > > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState implementation
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> contains IgniteTxEntry's, which are supposed
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> to be transferable.
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>> Mon, 13 Mar 2017 at 19:32,
> > >>> > > > > >>> Dmitriy
> > >>> > > > > >>>>>>>> Setrakyan
> > >>> > > > > >>>>>>>>> <
> > >>> > > > > >>>>>>>>>>>>>>>>>>>> dsetrakyan@apache.org
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>> :
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that we are
> > >>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> passing transaction objects
> > >>>
> > >>> --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Yakov Zhdanov <yz...@gridgain.com>.
Alex, I have commented in the ticket. Please take a look.

Thanks!
--
Yakov Zhdanov, Director R&D
*GridGain Systems*
www.gridgain.com

2017-06-29 17:27 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> I've attached HangTest. I suppose it should not hang, am I right?
>
> Thu, 29 Jun 2017 at 14:54, ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Igniters.
> > I'm reviewing all usages of a transaction's threadId
> > (IgniteTxAdapter#threadID). What is the point of using threadId
> > in the mvcc entry?
> >
> > Mon, 3 Apr 2017 at 9:47, ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> >> So what do you think of my idea?
> >>
> >> Fri, 31 Mar 2017, 11:05 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >>
> >>> Sorry for misleading you. We planned to support multi-node
> >>> transactions, but failed.
> >>>
> >>> Fri, 31 Mar 2017 at 10:51, Alexey Goncharuk <
> >>> alexey.goncharuk@gmail.com>:
> >>>
> >>> Well, now the scenario is clearer, but it has nothing to do with
> >>> multiple coordinators :) Let me think a little bit about it.
> >>>
> >>> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> >>>
> >>> > So what do you think of the issue?
> >>> >
> >>> > Thu, 30 Mar 2017, 17:49 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> >>> >:
> >>> >
> >>> > > Hi! Thanks for the help. I've created a ticket:
> >>> > > https://issues.apache.org/jira/browse/IGNITE-4887
> >>> > > and a commit :
> >>> > >
> >>> > > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06436b638e5c
> >>> > > We really need this feature
> >>> > >
> >>> > > Thu, 30 Mar 2017 at 11:31, Alexey Goncharuk <
> >>> > alexey.goncharuk@gmail.com
> >>> > > >:
> >>> > >
> >>> > > Aleksey,
> >>> > >
> >>> > > I doubt your approach works as expected. Current transaction
> recovery
> >>> > > protocol heavily relies on the originating node ID in its internal
> >>> logic.
> >>> > > For example, currently a transaction will be rolled back if you
> want
> >>> to
> >>> > > transfer a transaction ownership to another node and original tx
> >>> owner
> >>> > > fails. An attempt to commit such a transaction on another node may
> >>> fail
> >>> > > with all sorts of assertions. After transaction ownership changed,
> >>> you
> >>> > need
> >>> > > to notify all current transaction participants about this change,
> >>> and it
> >>> > > should also be done failover-safe, let alone that you did not add
> any
> >>> > tests
> >>> > > for these cases.
> >>> > >
> >>> > > I back Denis here. Please create a ticket first and come up with
> >>> clear
> >>> > > use-cases, API and protocol changes design. It is hard to reason
> >>> about
> >>> > the
> >>> > > changes you've made when we do not even understand why you are
> making
> >>> > these
> >>> > > changes and how they are supposed to work.
> >>> > >
> >>> > > --AG
> >>> > >
> >>> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <
> >>> alkuznetsov.sb@gmail.com>:
> >>> > >
> >>> > > > So, what do you think of my idea?
> >>> > > >
> >>> > > > Wed, 29 Mar 2017 at 10:35, ALEKSEY KUZNETSOV <
> >>> > alkuznetsov.sb@gmail.com
> >>> > > >:
> >>> > > >
> >>> > > > > Hi! No, I don't have a ticket for this.
> >>> > > > > In the ticket I have implemented methods that change the transaction
> >>> > > > > status to STOP, thus letting it commit the transaction in another
> >>> > > > > thread. In another thread you are going to restart the transaction in
> >>> > > > > order to commit it.
> >>> > > > > The mechanism behind it is obvious: we change the thread id to a newer
> >>> > > > > one in the ThreadMap, and make use of serialization of the txState and
> >>> > > > > the transactions themselves to transfer them into another thread.
> >>> > > > >
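Judging by that description, the intended usage of the patch is roughly the
following. stop() and restart() are hypothetical names standing in for the
methods added in that commit - they are not public Ignite API - and `ignite`
and `cache` are assumed to be in scope:

    // Thread 1: start the tx, do some work, then detach instead of committing.
    Transaction tx = ignite.transactions().txStart();
    cache.put("k1", "v1");
    tx.stop();     // hypothetical: mark the tx STOPped and unbind it from this thread

    // Thread 2: re-attach the very same tx and finish it.
    tx.restart();  // hypothetical: bind the tx to the current thread (new ThreadMap entry)
    cache.put("k2", "v2");
    tx.commit();   // committed by a different thread than the one that started it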
> >>> > > > >
> >>> > > > > Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:
> >>> > > > >
> >>> > > > > Aleksey,
> >>> > > > >
> >>> > > > > Do you have a ticket for this? Could you briefly list what exactly
> >>> > > > > was done and how the things work.
> >>> > > > >
> >>> > > > > —
> >>> > > > > Denis
> >>> > > > >
> >>> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> >>> > > > alkuznetsov.sb@gmail.com>
> >>> > > > > wrote:
> >>> > > > > >
> >>> > > > > > Hi, Igniters! I've made an implementation of transactions with a
> >>> > > > > > non-single coordinator. Here you can start a transaction in one
> >>> > > > > > thread and commit it in another thread.
> >>> > > > > > Take a look at it. Give your thoughts on it.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> >>> > > > > >
> >>> > > > > > Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <
> >>> > > sergi.vladykin@gmail.com
> >>> > > > >:
> >>> > > > > >
> >>> > > > > >> You know better, go ahead! :)
> >>> > > > > >>
> >>> > > > > >> Sergi
> >>> > > > > >>
> >>> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> >>> > > > alkuznetsov.sb@gmail.com
> >>> > > > > >:
> >>> > > > > >>
> >>> > > > > >>> we've discovered several problems regarding your "accumulation"
> >>> > > > > >>> approach. These are:
> >>> > > > > >>>
> >>> > > > > >>>   1. performance issues when transferring data from the temporary
> >>> > > > > >>>   cache to the permanent one. Keep in mind the great deal of
> >>> > > > > >>>   concurrent transactions in the Service committer
> >>> > > > > >>>   2. extreme memory load when keeping the temporary cache in
> >>> > > > > >>>   memory
> >>> > > > > >>>   3. as long as the user is not acquainted with Ignite, working
> >>> > > > > >>>   with the cache must be transparent for him. Keep this in mind.
> >>> > > > > >>>   The user's node can evaluate logic with no transaction at all,
> >>> > > > > >>>   so we should deal with both types of execution flow:
> >>> > > > > >>>   transactional and non-transactional. Another problem is
> >>> > > > > >>>   transaction id support at the user node. We would have to handle
> >>> > > > > >>>   all these issues and many more.
> >>> > > > > >>>   4. we cannot pessimistically lock an entity.
> >>> > > > > >>>
> >>> > > > > >>> As a result, we decided to move on with building a distributed
> >>> > > > > >>> transaction. We put aside your "accumulation" approach until we
> >>> > > > > >>> realize how to solve the difficulties above.
> >>> > > > > >>>
> >>> > > > > >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <
> >>> > > > sergi.vladykin@gmail.com
> >>> > > > > >:
> >>> > > > > >>>
> >>> > > > > >>>> The problem "How to run millions of entities, and millions of
> >>> > > > > >>>> operations, on a single Pentium3" is out of scope here. Do the
> >>> > > > > >>>> math, plan capacity reasonably.
> >>> > > > > >>>>
> >>> > > > > >>>> Sergi
> >>> > > > > >>>>
> >>> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> >>> > > > > alkuznetsov.sb@gmail.com
> >>> > > > > >>> :
> >>> > > > > >>>>
> >>> > > > > >>>>> Hmm, if we have millions of entities and millions of operations,
> >>> > > > > >>>>> wouldn't this approach lead to memory overflow and performance
> >>> > > > > >>>>> degradation?
> >>> > > > > >>>>>
> >>> > > > > >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <
> >>> > > > > >> sergi.vladykin@gmail.com
> >>> > > > > >>>> :
> >>> > > > > >>>>>
> >>> > > > > >>>>>> 1. Actually you have to check versions on all the values you
> >>> > > > > >>>>>> have read during the tx.
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> put(k1, get(k2) + 5)
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> We have to remember the version for k2. This logic can be
> >>> > > > > >>>>>> relatively easily encapsulated in a framework atop of Ignite.
> >>> > > > > >>>>>> You need to implement one to make all this stuff usable.
> >>> > > > > >>>>>>
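One possible shape for that framework piece, reusing the illustrative
OKey/OValue classes sketched earlier in the thread (none of this is an
Ignite API): every read records the value's `ver`, so the commit phase can
verify that k2 did not change even though only k1 was written.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.IgniteCache;

    class ReadTracker {
        private final IgniteCache<OKey, OValue> cache;
        private final Map<String, UUID> readVers = new HashMap<>();

        ReadTracker(IgniteCache<OKey, OValue> cache) {
            this.cache = cache;
        }

        OValue get(String key) {
            OValue v = cache.get(new OKey(key, null));

            if (v != null)
                readVers.put(key, v.ver); // remember what the result was based on

            return v;
        }

        /** Versions of everything read, for the committer's version check. */
        Map<String, UUID> readVersions() {
            return readVers;
        }
    }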
> >>> > > > > >>>>>> 2. I suggest avoiding any locking here, because you will easily
> >>> > > > > >>>>>> end up with deadlocks. If you do not have too frequent updates
> >>> > > > > >>>>>> for your keys, the optimistic approach will work just fine.
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> Theoretically in the Committer Service you can start a thread
> >>> > > > > >>>>>> for the lifetime of the whole distributed transaction, take a
> >>> > > > > >>>>>> lock on the key using IgniteCache.lock(K key) before executing
> >>> > > > > >>>>>> any Services, wait for all the services to complete, execute
> >>> > > > > >>>>>> the optimistic commit in the same thread while keeping this
> >>> > > > > >>>>>> lock and then release it. Notice that all the Ignite
> >>> > > > > >>>>>> transactions inside of all Services must be optimistic here to
> >>> > > > > >>>>>> be able to read this locked key.
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> But again, I do not recommend using this approach until you
> >>> > > > > >>>>>> have a reliable deadlock avoidance scheme.
> >>> > > > > >>>>>>
> >>> > > > > >>>>>> Sergi
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>
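A sketch of that lock-based variant. IgniteCache.lock(K) is a real Ignite
API returning a java.util.concurrent.locks.Lock (the cache has to be
TRANSACTIONAL); the service and commit calls are stand-ins:

    import java.util.concurrent.locks.Lock;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;

    public class LockingOrchestration {
        static void runJob(Ignite ignite, String key) {
            IgniteCache<String, Object> cache = ignite.cache("data");

            Lock lock = cache.lock(key); // held by this one thread for the whole job
            lock.lock();
            try {
                // runServiceA(); runServiceB();  // services use OPTIMISTIC txs,
                //                                // so they can still read the key
                // commitOptimistically(ignite);  // same thread, lock still held
            }
            finally {
                lock.unlock();
            }
        }
    }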
> >>> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> >>> > > > > >>> alkuznetsov.sb@gmail.com
> >>> > > > > >>>>> :
> >>> > > > > >>>>>>
> >>> > > > > >>>>>>> Yeah, now I got it.
> >>> > > > > >>>>>>> There are some doubts on this approach:
> >>> > > > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> >>> > > > > >>>>>>> altered the original values, you must check versions of other
> >>> > > > > >>>>>>> dependent keys. How could we obtain those keys (in an automated
> >>> > > > > >>>>>>> manner, of course)?
> >>> > > > > >>>>>>> 2) How could we lock a key before some Service A introduces
> >>> > > > > >>>>>>> changes, so that no other service is allowed to change this
> >>> > > > > >>>>>>> key-value? (a sort of pessimistic blocking)
> >>> > > > > >>>>>>> Maybe you know some implementations of such an approach?
> >>> > > > > >>>>>>>
> >>> > > > > >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <
> >>> > > > > >>>>> alkuznetsov.sb@gmail.com
> >>> > > > > >>>>>>> :
> >>> > > > > >>>>>>>
> >>> > > > > >>>>>>>> Thank you very much for the help. I will answer later.
> >>> > > > > >>>>>>>>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
I've attached HangTest. I suppose it should not hang, am I right?

Thu, 29 Jun 2017 at 14:54, ALEKSEY KUZNETSOV <al...@gmail.com>:

> Igniters,
> I'm reviewing all usages of threadId of transaction (IgniteTxAdapter#threadID).
> What is the point of using threadId in an MVCC entry?
>
> Mon, 3 Apr 2017 at 9:47, ALEKSEY KUZNETSOV <al...@gmail.com>:
>
>> so what do you think of my idea?
>>
>> Fri, 31 Mar 2017, 11:05 ALEKSEY KUZNETSOV <al...@gmail.com>:
>>
>>> sorry for misleading you. We planned to support multi-node transactions,
>>> but failed.
>>>
>>> Fri, 31 Mar 2017 at 10:51, Alexey Goncharuk <
>>> alexey.goncharuk@gmail.com>:
>>>
>>> Well, now the scenario is more clear, but it has nothing to do with
>>> multiple coordinators :) Let me think a little bit about it.
>>>
>>> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>>>
>>> > so what do you think of the issue?
>>> >
>>> > Thu, 30 Mar 2017, 17:49 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
>>> >:
>>> >
>>> > > Hi ! Thanks for help. I've created ticket :
>>> > > https://issues.apache.org/jira/browse/IGNITE-4887
>>> > > and a commit :
>>> > >
>>> https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
>>> > 436b638e5c
>>> > > We really need this feature
>>> > >
>>> > > Thu, 30 Mar 2017 at 11:31, Alexey Goncharuk <
>>> > alexey.goncharuk@gmail.com
>>> > > >:
>>> > >
>>> > > Aleksey,
>>> > >
>>> > > I doubt your approach works as expected. Current transaction recovery
>>> > > protocol heavily relies on the originating node ID in its internal
>>> logic.
>>> > > For example, currently a transaction will be rolled back if you want
>>> to
>>> > > transfer a transaction ownership to another node and original tx
>>> owner
>>> > > fails. An attempt to commit such a transaction on another node may
>>> fail
>>> > > with all sorts of assertions. After transaction ownership changed,
>>> you
>>> > need
>>> > > to notify all current transaction participants about this change,
>>> and it
>>> > > should also be done failover-safe, let alone that you did not add any
>>> > tests
>>> > > for these cases.
>>> > >
>>> > > I back Denis here. Please create a ticket first and come up with
>>> clear
>>> > > use-cases, API and protocol changes design. It is hard to reason
>>> about
>>> > the
>>> > > changes you've made when we do not even understand why you are making
>>> > these
>>> > > changes and how they are supposed to work.
>>> > >
>>> > > --AG
>>> > >
>>> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <
>>> alkuznetsov.sb@gmail.com>:
>>> > >
>>> > > > So, what do you think of my idea?
>>> > > >
>>> > > > Wed, 29 Mar 2017 at 10:35, ALEKSEY KUZNETSOV <
>>> > alkuznetsov.sb@gmail.com
>>> > > >:
>>> > > >
>>> > > > > Hi! No, I don't have a ticket for this.
>>> > > > > In the ticket I have implemented methods that change the transaction
>>> > > > > status to STOP, thus letting it commit the transaction in another
>>> > > > > thread. In the other thread you are going to restart the transaction
>>> > > > > in order to commit it.
>>> > > > > The mechanism behind it is obvious: we change the thread id to the
>>> > > > > newer one in ThreadMap, and make use of serialization of txState and
>>> > > > > the transactions themselves to transfer them into another thread.
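[Aleksey's patch predates it, but the hand-off described above is, in later
Ignite versions, expressible through the suspend()/resume() methods on
org.apache.ignite.transactions.Transaction. A minimal sketch of those
semantics, with cluster and cache setup assumed and only the OPTIMISTIC mode
used:]

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;

    public class TxHandOffSketch {
        public static void main(String[] args) throws Exception {
            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("test");
                ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
                IgniteCache<String, String> cache = ignite.getOrCreateCache(ccfg);

                ExecutorService exec = Executors.newSingleThreadExecutor();

                Transaction tx = ignite.transactions().txStart(OPTIMISTIC, READ_COMMITTED);
                cache.put("k1", "v1");
                tx.suspend();               // detach the tx from the starting thread

                exec.submit(() -> {
                    tx.resume();            // re-attach it in the committing thread
                    cache.put("k2", "v2");
                    tx.commit();            // both puts commit atomically
                }).get();

                exec.shutdown();
            }
        }
    }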
>>> > > > >
>>> > > > >
>>> > > > > Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:
>>> > > > >
>>> > > > > Aleksey,
>>> > > > >
>>> > > > > Do you have a ticket for this? Could you briefly list what exactly
>>> > > > > was done and how things work.
>>> > > > >
>>> > > > > —
>>> > > > > Denis
>>> > > > >
>>> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
>>> > > > alkuznetsov.sb@gmail.com>
>>> > > > > wrote:
>>> > > > > >
>>> > > > > > Hi, Igniters! I've made an implementation of transactions with a
>>> > > > > > non-single coordinator. Here you can start a transaction in one
>>> > > > > > thread and commit it in another thread.
>>> > > > > > Take a look at it. Give your thoughts on it.
>>> > > > > >
>>> > > > > >
>>> > > > > https://github.com/voipp/ignite/pull/10/commits/
>>> > > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
>>> > > > > >
>>> > > > > > Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <
>>> > > sergi.vladykin@gmail.com
>>> > > > >:
>>> > > > > >
>>> > > > > >> You know better, go ahead! :)
>>> > > > > >>
>>> > > > > >> Sergi
>>> > > > > >>
>>> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > alkuznetsov.sb@gmail.com
>>> > > > > >:
>>> > > > > >>
>>> > > > > >>> we've discovered several problems regarding your "accumulation"
>>> > > > > >>> approach. These are:
>>> > > > > >>>
>>> > > > > >>>   1. performance issues when transferring data from the temporary
>>> > > > > >>>   cache to the permanent one. Keep in mind the big deal of
>>> > > > > >>>   concurrent transactions in the Service committer
>>> > > > > >>>   2. extreme memory load when keeping the temporary cache in memory
>>> > > > > >>>   3. as long as the user is not acquainted with Ignite, working
>>> > > > > >>>   with the cache must be transparent for him. Keep this in mind.
>>> > > > > >>>   The user's node can evaluate logic with no transaction at all,
>>> > > > > >>>   so we should deal with both types of execution flow:
>>> > > > > >>>   transactional and non-transactional. Another problem is
>>> > > > > >>>   transaction id support at the user node. We would have to handle
>>> > > > > >>>   all these issues and many more.
>>> > > > > >>>   4. we cannot pessimistically lock an entity.
>>> > > > > >>>
>>> > > > > >>> As a result, we decided to move on to building a distributed
>>> > > > > >>> transaction. We put aside your "accumulation" approach until we
>>> > > > > >>> realize how to solve the difficulties above.
>>> > > > > >>>
>>> > > > > >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <
>>> > > > sergi.vladykin@gmail.com
>>> > > > > >:
>>> > > > > >>>
>>> > > > > >>>> The problem "How to run millions of entities, and millions of
>>> > > > > >>>> operations, on a single Pentium3" is out of scope here. Do the
>>> > > > > >>>> math, plan capacity reasonably.
>>> > > > > >>>> reasonably.
>>> > > > > >>>>
>>> > > > > >>>> Sergi
>>> > > > > >>>>
>>> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > alkuznetsov.sb@gmail.com
>>> > > > > >>> :
>>> > > > > >>>>
>>> > > > > >>>>> hmm, if we have millions of entities, and millions of
>>> > > > > >>>>> operations, would not this approach lead to memory overflow and
>>> > > > > >>>>> performance degradation?
>>> > > > > >>>>>
>>> > > > > >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <
>>> > > > > >> sergi.vladykin@gmail.com
>>> > > > > >>>> :
>>> > > > > >>>>>
>>> > > > > >>>>>> 1. Actually you have to check versions on all the values
>>> you
>>> > > have
>>> > > > > >>> read
>>> > > > > >>>>>> during the tx.
>>> > > > > >>>>>>
>>> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
>>> > > > > >>>>>>
>>> > > > > >>>>>> put(k1, get(k2) + 5)
>>> > > > > >>>>>>
>>> > > > > >>>>>> We have to remember the version for k2. This logic can be
>>> > > > > >> relatively
>>> > > > > >>>>> easily
>>> > > > > >>>>>> encapsulated in a framework atop of Ignite. You need to
>>> > > implement
>>> > > > > >> one
>>> > > > > >>>> to
>>> > > > > >>>>>> make all this stuff usable.
>>> > > > > >>>>>>
>>> > > > > >>>>>> 2. I suggest to avoid any locking here, because you easily
>>> > will
>>> > > > end
>>> > > > > >>> up
>>> > > > > >>>>> with
>>> > > > > >>>>>> deadlocks. If you do not have too frequent updates for
>>> your
>>> > > keys,
>>> > > > > >>>>>> optimistic approach will work just fine.
>>> > > > > >>>>>>
>>> > > > > >>>>>> Theoretically in the Committer Service you can start a
>>> thread
>>> > > for
>>> > > > > >> the
>>> > > > > >>>>>> lifetime of the whole distributed transaction, take a
>>> lock on
>>> > > the
>>> > > > > >> key
>>> > > > > >>>>> using
>>> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services,
>>> wait
>>> > for
>>> > > > all
>>> > > > > >>> the
>>> > > > > >>>>>> services to complete, execute optimistic commit in the
>>> same
>>> > > thread
>>> > > > > >>>> while
>>> > > > > >>>>>> keeping this lock and then release it. Notice that all the
>>> > > Ignite
>>> > > > > >>>>>> transactions inside of all Services must be optimistic
>>> here to
>>> > > be
>>> > > > > >>> able
>>> > > > > >>>> to
>>> > > > > >>>>>> read this locked key.
>>> > > > > >>>>>>
>>> > > > > >>>>>> But again I do not recommend you using this approach
>>> until you
>>> > > > > >> have a
>>> > > > > >>>>>> reliable deadlock avoidance scheme.
>>> > > > > >>>>>>
>>> > > > > >>>>>> Sergi
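[A sketch of the lock-then-commit sequence Sergi describes above. It carries
the same caveat he gives: without a reliable deadlock avoidance scheme this is
fragile, so treat it as an illustration of the sequence only. The cache is
assumed TRANSACTIONAL, and runServices()/applyUpdates() are placeholders for
application logic, not Ignite API:]

    import java.util.concurrent.locks.Lock;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    public class CommitterSketch {
        // Hold an explicit lock on a guard key for the whole distributed
        // transaction, then do the optimistic commit in the same thread.
        static void commitDistributed(Ignite ignite, IgniteCache<String, Object> cache,
                                      String guardKey) {
            Lock lock = cache.lock(guardKey);   // explicit, non-transactional lock
            lock.lock();
            try {
                runServices();                  // placeholder: invoke all the services

                try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                    applyUpdates(cache);        // placeholder: check versions, write final values
                    tx.commit();
                }
            }
            finally {
                lock.unlock();
            }
        }

        static void runServices() { /* application-specific */ }
        static void applyUpdates(IgniteCache<String, Object> cache) { /* application-specific */ }
    }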
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>>
>>> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > >>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>> :
>>> > > > > >>>>>>
>>> > > > > >>>>>>> Yeah, now I got it.
>>> > > > > >>>>>>> There are some doubts on this approach:
>>> > > > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
>>> > > > > >>>>>>> altered the original values, you must check versions of other
>>> > > > > >>>>>>> dependent keys. How could we obtain those keys (in an
>>> > > > > >>>>>>> automated manner, of course)?
>>> > > > > >>>>>>> 2) How could we lock a key before some Service A introduces
>>> > > > > >>>>>>> changes, so that no other service is allowed to change this
>>> > > > > >>>>>>> key-value (a sort of pessimistic blocking)?
>>> > > > > >>>>>>> Maybe you know some implementations of such an approach?
>>> > > > > >>>>>>>
>>> > > > > >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <
>>> > > > > >>>>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>>>> :
>>> > > > > >>>>>>>
>>> > > > > >>>>>>>> Thank you very much for help.  I will answer later.
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <
>>> > > > > >>>>> sergi.vladykin@gmail.com
>>> > > > > >>>>>>> :
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> All the services do not update key in place, but only
>>> > generate
>>> > > > > >>> new
>>> > > > > >>>>> keys
>>> > > > > >>>>>>>> augmented by otx and store the updated value in the same
>>> > cache
>>> > > > > >> +
>>> > > > > >>>>>> remember
>>> > > > > >>>>>>>> the keys and versions participating in the transaction
>>> in
>>> > some
>>> > > > > >>>>> separate
>>> > > > > >>>>>>>> atomic cache.
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Follow this sequence of changes applied to cache
>>> contents by
>>> > > > > >> each
>>> > > > > >>>>>>> Service:
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Initial cache contents:
>>> > > > > >>>>>>>>            [k1 => v1]
>>> > > > > >>>>>>>>            [k2 => v2]
>>> > > > > >>>>>>>>            [k3 => v3]
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Cache contents after Service A:
>>> > > > > >>>>>>>>            [k1 => v1]
>>> > > > > >>>>>>>>            [k2 => v2]
>>> > > > > >>>>>>>>            [k3 => v3]
>>> > > > > >>>>>>>>            [k1x => v1a]
>>> > > > > >>>>>>>>            [k2x => v2a]
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some
>>> separate
>>> > > > > >>> atomic
>>> > > > > >>>>>> cache
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Cache contents after Service B:
>>> > > > > >>>>>>>>            [k1 => v1]
>>> > > > > >>>>>>>>            [k2 => v2]
>>> > > > > >>>>>>>>            [k3 => v3]
>>> > > > > >>>>>>>>            [k1x => v1a]
>>> > > > > >>>>>>>>            [k2x => v2ab]
>>> > > > > >>>>>>>>            [k3x => v3b]
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
>>> some
>>> > > > > >>>>> separate
>>> > > > > >>>>>>>> atomic cache
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Finally the Committer Service takes this map of updated
>>> keys
>>> > > > > >> and
>>> > > > > >>>>> their
>>> > > > > >>>>>>>> versions from some separate atomic cache, starts Ignite
>>> > > > > >>> transaction
>>> > > > > >>>>> and
>>> > > > > >>>>>>>> replaces all the values for k* keys to values taken
>>> from k*x
>>> > > > > >>> keys.
>>> > > > > >>>>> The
>>> > > > > >>>>>>>> successful result must be the following:
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>            [k1 => v1a]
>>> > > > > >>>>>>>>            [k2 => v2ab]
>>> > > > > >>>>>>>>            [k3 => v3b]
>>> > > > > >>>>>>>>            [k1x => v1a]
>>> > > > > >>>>>>>>            [k2x => v2ab]
>>> > > > > >>>>>>>>            [k3x => v3b]
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
>>> some
>>> > > > > >>>>> separate
>>> > > > > >>>>>>>> atomic cache
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> But Committer Service also has to check that no one
>>> updated
>>> > > the
>>> > > > > >>>>>> original
>>> > > > > >>>>>>>> values before us, because otherwise we can not give any
>>> > > > > >>>>> serializability
>>> > > > > >>>>>>>> guarantee for these distributed transactions. Here we
>>> may
>>> > need
>>> > > > > >> to
>>> > > > > >>>>> check
>>> > > > > >>>>>>> not
>>> > > > > >>>>>>>> only versions of the updated keys, but also versions of
>>> any
>>> > > > > >> other
>>> > > > > >>>>> keys
>>> > > > > >>>>>>> end
>>> > > > > >>>>>>>> result depends on.
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> After that Committer Service has to do a cleanup (may be
>>> > > > > >> outside
>>> > > > > >>> of
>>> > > > > >>>>> the
>>> > > > > >>>>>>>> committing tx) to come to the following final state:
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>            [k1 => v1a]
>>> > > > > >>>>>>>>            [k2 => v2ab]
>>> > > > > >>>>>>>>            [k3 => v3b]
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Makes sense?
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> Sergi
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > >>>>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>>>> :
>>> > > > > >>>>>>>>
>>> > > > > >>>>>>>>>   - what do u mean by saying "
>>> > > > > >>>>>>>>> *in a single transaction checks value versions for all
>>> the
>>> > > > > >> old
>>> > > > > >>>>> values
>>> > > > > >>>>>>>>>    and replaces them with calculated new ones *"? Every
>>> > time
>>> > > > > >>> you
>>> > > > > >>>>>>> change
>>> > > > > >>>>>>>>>   value(in some service), you store it to *some special
>>> > > > > >> atomic
>>> > > > > >>>>>> cache*
>>> > > > > >>>>>>> ,
>>> > > > > >>>>>>>> so
>>> > > > > >>>>>>>>>   when all services ceased working, Service commiter
>>> got a
>>> > > > > >>>> values
>>> > > > > >>>>>> with
>>> > > > > >>>>>>>> the
>>> > > > > >>>>>>>>>   last versions.
>>> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*"
>>> > > > > >>> Service
>>> > > > > >>>>>>> commiter
>>> > > > > >>>>>>>>>   persists them into permanent store, isn't it ?
>>> > > > > >>>>>>>>>   - I cant grasp your though, you say "*in case of
>>> version
>>> > > > > >>>>> mismatch
>>> > > > > >>>>>> or
>>> > > > > >>>>>>>> TX
>>> > > > > >>>>>>>>>   timeout just rollbacks*". But what versions would it
>>> > > > > >> match?
>>> > > > > >>>>>>>>>
>>> > > > > >>>>>>>>>
>>> > > > > >>>>>>>>> Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <
>>> > > > > >>>>>> sergi.vladykin@gmail.com
>>> > > > > >>>>>>>> :
>>> > > > > >>>>>>>>>
>>> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
>>> > > > > >>>>> application
>>> > > > > >>>>>>>> level.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Lets say we have to call 2 services in the following
>>> > order:
>>> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 =>
>>> v2]
>>> > > > > >> to
>>> > > > > >>>>> [k1
>>> > > > > >>>>>> =>
>>> > > > > >>>>>>>>> v1a,
>>> > > > > >>>>>>>>>>  k2 => v2a]
>>> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 =>
>>> v3]
>>> > > > > >> to
>>> > > > > >>>> [k2
>>> > > > > >>>>>> =>
>>> > > > > >>>>>>>>> v2ab,
>>> > > > > >>>>>>>>>> k3 => v3b]
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> The change
>>> > > > > >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
>>> > > > > >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
>>> > > > > >>>>>>>>>> must happen in a single transaction.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Optimistic protocol to solve this:
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a
>>> unique
>>> > > > > >>>>>>> orchestrator
>>> > > > > >>>>>>>> TX
>>> > > > > >>>>>>>>>> identifier - it must be a parameter passed to all the
>>> > > > > >>> services.
>>> > > > > >>>>> If
>>> > > > > >>>>>>>> `otx`
>>> > > > > >>>>>>>>> is
>>> > > > > >>>>>>>>>> set to some value it means that it is an intermediate
>>> key
>>> > > > > >> and
>>> > > > > >>>> is
>>> > > > > >>>>>>>> visible
>>> > > > > >>>>>>>>>> only inside of some transaction, for the finalized key
>>> > > > > >> `otx`
>>> > > > > >>>> must
>>> > > > > >>>>>> be
>>> > > > > >>>>>>>>> null -
>>> > > > > >>>>>>>>>> it means the key is committed and visible for
>>> everyone.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
>>> > version
>>> > > > > >>> of
>>> > > > > >>>>> that
>>> > > > > >>>>>>>>> value.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is
>>> to use
>>> > > > > >>>> UUID.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Workflow is the following:
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Orchestrator starts the distributed transaction with
>>> `otx`
>>> > > > > >> =
>>> > > > > >>> x
>>> > > > > >>>>> and
>>> > > > > >>>>>>>> passes
>>> > > > > >>>>>>>>>> this parameter to all the services.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Service A:
>>> > > > > >>>>>>>>>> - does some computations
>>> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
>>> > > > > >>>>>>>>>>      where
>>> > > > > >>>>>>>>>>          Za - left time from max Orchestrator TX
>>> duration
>>> > > > > >>>> after
>>> > > > > >>>>>>>> Service
>>> > > > > >>>>>>>>> A
>>> > > > > >>>>>>>>>> end
>>> > > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field
>>> `otx` =
>>> > > > > >> x
>>> > > > > >>>>>>>>>>          v2a has updated version `ver`
>>> > > > > >>>>>>>>>> - returns a set of updated keys and all the old
>>> versions
>>> > > > > >> to
>>> > > > > >>>> the
>>> > > > > >>>>>>>>>> orchestrator
>>> > > > > >>>>>>>>>>       or just stores it in some special atomic cache
>>> like
>>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Service B:
>>> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it
>>> knows
>>> > > > > >>>> `otx`
>>> > > > > >>>>> =
>>> > > > > >>>>>> x
>>> > > > > >>>>>>>>>> - does computations
>>> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
>>> > > > > >>>>>>>>>> - updates the set of updated keys like [x => (k1 ->
>>> ver1,
>>> > > > > >> k2
>>> > > > > >>>> ->
>>> > > > > >>>>>>> ver2,
>>> > > > > >>>>>>>> k3
>>> > > > > >>>>>>>>>> -> ver3)] TTL = Zb
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
>>> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` =
>>> x
>>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
>>> > > > > >>>>>>>>>> - in a single transaction checks value versions for
>>> all
>>> > > > > >> the
>>> > > > > >>>> old
>>> > > > > >>>>>>> values
>>> > > > > >>>>>>>>>>       and replaces them with calculated new ones
>>> > > > > >>>>>>>>>> - does cleanup of temporary keys and values
>>> > > > > >>>>>>>>>> - in case of version mismatch or TX timeout just
>>> rollbacks
>>> > > > > >>> and
>>> > > > > >>>>>>> signals
>>> > > > > >>>>>>>>>>        to Orchestrator to restart the job with new
>>> `otx`
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> PROFIT!!
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> This approach even allows you to run independent
>>> parts of
>>> > > > > >> the
>>> > > > > >>>>> graph
>>> > > > > >>>>>>> in
>>> > > > > >>>>>>>>>> parallel (with TX transfer you will always run only
>>> one at
>>> > > > > >> a
>>> > > > > >>>>> time).
>>> > > > > >>>>>>>> Also
>>> > > > > >>>>>>>>> it
>>> > > > > >>>>>>>>>> does not require inventing any special fault tolerance
>>> > > > > >>> technics
>>> > > > > >>>>>>> because
>>> > > > > >>>>>>>>>> Ignite caches are already fault tolerant and all the
>>> > > > > >>>> intermediate
>>> > > > > >>>>>>>> results
>>> > > > > >>>>>>>>>> are virtually invisible and stored with TTL, thus in
>>> case
>>> > > > > >> of
>>> > > > > >>>> any
>>> > > > > >>>>>>> crash
>>> > > > > >>>>>>>>> you
>>> > > > > >>>>>>>>>> will not have inconsistent state or garbage.
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> Sergi
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > >>>>>>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>>>>>> :
>>> > > > > >>>>>>>>>>
>>> > > > > >>>>>>>>>>> Okay, we are open to proposals on the business task. I
>>> > > > > >>>>>>>>>>> mean, we can make use of some other thing, not a
>>> > > > > >>>>>>>>>>> distributed transaction. No transaction yet.
>>> > > > > >>>>>>>>>>>
>>> > > > > >>>>>>>>>>> Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <
>>> > > > > >>>>>> vozerov@gridgain.com
>>> > > > > >>>>>>>> :
>>> > > > > >>>>>>>>>>>
>>> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi
>>> already
>>> > > > > >>>>>>> mentioned,
>>> > > > > >>>>>>>>> the
>>> > > > > >>>>>>>>>>>> problem is far more complex, than simply passing TX
>>> > > > > >> state
>>> > > > > >>>>> over
>>> > > > > >>>>>> a
>>> > > > > >>>>>>>>> wire.
>>> > > > > >>>>>>>>>>> Most
>>> > > > > >>>>>>>>>>>> probably a kind of coordinator will be required
>>> still
>>> > > > > >> to
>>> > > > > >>>>> manage
>>> > > > > >>>>>>> all
>>> > > > > >>>>>>>>>> kinds
>>> > > > > >>>>>>>>>>>> of failures. This task should be started with clean
>>> > > > > >>> design
>>> > > > > >>>>>>> proposal
>>> > > > > >>>>>>>>>>>> explaining how we handle all these concurrent
>>> events.
>>> > > > > >> And
>>> > > > > >>>>> only
>>> > > > > >>>>>>>> then,
>>> > > > > >>>>>>>>>> when
>>> > > > > >>>>>>>>>>>> we understand all implications, we should move to
>>> > > > > >>>> development
>>> > > > > >>>>>>>> stage.
>>> > > > > >>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV
>>> <
>>> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
>>> > > > > >>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>> Right
>>> > > > > >>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>> Wed, 15 Mar 2017 at 10:35, Sergi Vladykin <
>>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
>>> > > > > >>>>>>>>>>>> :
>>> > > > > >>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
>>> > > > > >>>>>> predefined
>>> > > > > >>>>>>>>> graph
>>> > > > > >>>>>>>>>> of
>>> > > > > >>>>>>>>>>>>>> distributed services to be invoked, calls them by
>>> > > > > >>> some
>>> > > > > >>>>> kind
>>> > > > > >>>>>>> of
>>> > > > > >>>>>>>>> RPC
>>> > > > > >>>>>>>>>>> and
>>> > > > > >>>>>>>>>>>>>> passes the needed parameters between them, right?
>>> > > > > >>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>> Sergi
>>> > > > > >>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>>>>>>>>>> :
>>> > > > > >>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible
>>> > > > > >>>>>>>>>>>>>>> for managing business scenario flows. Many nodes are
>>> > > > > >>>>>>>>>>>>>>> involved in scenarios. They exchange data and follow
>>> > > > > >>>>>>>>>>>>>>> one another. If you are acquainted with the BPMN
>>> > > > > >>>>>>>>>>>>>>> framework, the orchestrator is like a BPMN engine.
>>> > > > > >>>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>>> Tue, 14 Mar 2017, 18:56 Sergi Vladykin <
>>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
>>> > > > > >>>>>>>>>>>> :
>>> > > > > >>>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
>>> > > > > >> from
>>> > > > > >>>>>>> Microsoft
>>> > > > > >>>>>>>> or
>>> > > > > >>>>>>>>>>> your
>>> > > > > >>>>>>>>>>>>>> custom
>>> > > > > >>>>>>>>>>>>>>>> in-house software?
>>> > > > > >>>>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>>>> Sergi
>>> > > > > >>>>>>>>>>>>>>>>
>>> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
>>> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>>> > > > > >>>>>>>>>>>>>>> :
>>> > > > > >>>>>>>>>>>>>>>>
>>> > Fine. Let's say we've got multiple servers which fulfill custom logic.
>>> > These servers compose an oriented graph (BPMN process) which is controlled
>>> > by the Orchestrator.
>>> > For instance, *server1* creates *variable A* with value 1, persists it to
>>> > the IGNITE cache and creates *variable B* and sends it to *server2*. The
>>> > latter receives *variable B*, does some logic with it and stores it to
>>> > IGNITE. All the work made by both servers must be fulfilled in *one*
>>> > transaction, because we need all information done, or nothing (rolled
>>> > back). The scenario is managed by the orchestrator.
>>> >
>>> > Tue, 14 Mar 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
>>> >
>>> > > Ok, it is not a business case, it is your wrong solution for it.
>>> > > Lets try again, what is the business case?
>>> > >
>>> > > Sergi
>>> > >
>>> > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>>> > >
>>> > > > The case is the following: one starts a transaction on one node, and
>>> > > > commits this transaction on another JVM node (or rolls it back
>>> > > > remotely).
>>> > > >
>>> > > > Tue, 14 Mar 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
>>> > > >
>>> > > > > Because even if you make it work for some simplistic scenario, get
>>> > > > > ready to write many fault tolerance tests and make sure that your
>>> > > > > TXs work gracefully in all modes in case of crashes. Also make sure
>>> > > > > that we do not have any performance drops after all your changes in
>>> > > > > existing benchmarks. All in all I don't believe these conditions
>>> > > > > will be met and your contribution will be accepted.
>>> > > > >
>>> > > > > Better solution to what problem? Sending TX to another node? The
>>> > > > > problem statement itself is already wrong. What business case are
>>> > > > > you trying to solve? I'm sure everything you need can be done in a
>>> > > > > much more simple and efficient way at the application level.
>>> > > > >
>>> > > > > Sergi
>>> > > > >
>>> > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>>> > > > >
>>> > > > > > Why wrong ? You know the better solution?
>>> > > > > >
>>> > > > > > Tue, 14 Mar 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
>>> > > > > >
>>> > > > > > > Just serializing TX object and deserializing it on another node
>>> > > > > > > is meaningless, because other nodes participating in the TX have
>>> > > > > > > to know about the new coordinator. This will require protocol
>>> > > > > > > changes, we definitely will have fault tolerance and performance
>>> > > > > > > issues. IMO the whole idea is wrong and it makes no sense to
>>> > > > > > > waste time on it.
>>> > > > > > >
>>> > > > > > > Sergi
>>> > > > > > >
>>> > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
>>> > > > > > >
>>> > > > > > > > IgniteTransactionState implementation contains IgniteTxEntry's
>>> > > > > > > > which is supposed to be transferable
>>> > > > > > > >
>>> > > > > > > > Mon, 13 Mar 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
>>> > > > > > > >
>>> > > > > > > > > It sounds a little scary to me that we are passing
>>> > > > > > > > > transaction objects
>>> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Igniters,
I'm reviewing all usages of threadId of transaction (IgniteTxAdapter#threadID).
What is the point of using threadId in an MVCC entry?
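[For context, the thread affinity in question is visible from the public API
alone. A minimal sketch (cache name and setup assumed): a transaction is
looked up by the id of the thread that started it, so a second thread simply
does not see it:]

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.transactions.Transaction;

    public class ThreadAffinitySketch {
        public static void main(String[] args) throws Exception {
            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("test");
                ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
                IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

                Transaction tx = ignite.transactions().txStart();
                cache.put(1, 1);        // enlisted: the tx is keyed by this thread's id

                Thread other = new Thread(() -> {
                    // Runs outside the tx: transactions are looked up by thread id,
                    // so this thread does not see the one started above.
                    cache.put(2, 2);
                });
                other.start();
                other.join();

                tx.rollback();          // put(1, 1) is discarded, put(2, 2) stays
            }
        }
    }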

Mon, 3 Apr 2017 at 9:47, ALEKSEY KUZNETSOV <al...@gmail.com>:

> so what do you think of my idea?
>
> Fri, 31 Mar 2017, 11:05 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
>> sorry for misleading you. We planned to support multi-node transactions,
>> but failed.
>>
>> Fri, 31 Mar 2017 at 10:51, Alexey Goncharuk <alexey.goncharuk@gmail.com
>> >:
>>
>> Well, now the scenario is more clear, but it has nothing to do with
>> multiple coordinators :) Let me think a little bit about it.
>>
>> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>>
>> > so what do you think of the issue?
>> >
>> > Thu, 30 Mar 2017, 17:49 ALEKSEY KUZNETSOV <al...@gmail.com>:
>> >
>> > > Hi ! Thanks for help. I've created ticket :
>> > > https://issues.apache.org/jira/browse/IGNITE-4887
>> > > and a commit :
>> > > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
>> > 436b638e5c
>> > > We really need this feature
>> > >
>> > > Thu, 30 Mar 2017 at 11:31, Alexey Goncharuk <
>> > alexey.goncharuk@gmail.com
>> > > >:
>> > >
>> > > Aleksey,
>> > >
>> > > I doubt your approach works as expected. Current transaction recovery
>> > > protocol heavily relies on the originating node ID in its internal
>> logic.
>> > > For example, currently a transaction will be rolled back if you want
>> to
>> > > transfer a transaction ownership to another node and original tx owner
>> > > fails. An attempt to commit such a transaction on another node may
>> fail
>> > > with all sorts of assertions. After transaction ownership changed, you
>> > need
>> > > to notify all current transaction participants about this change, and
>> it
>> > > should also be done failover-safe, let alone that you did not add any
>> > tests
>> > > for these cases.
>> > >
>> > > I back Denis here. Please create a ticket first and come up with clear
>> > > use-cases, API and protocol changes design. It is hard to reason about
>> > the
>> > > changes you've made when we do not even understand why you are making
>> > these
>> > > changes and how they are supposed to work.
>> > >
>> > > --AG
>> > >
>> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <
>> alkuznetsov.sb@gmail.com>:
>> > >
>> > > > So, what do you think of my idea?
>> > > >
>> > > > Wed, 29 Mar 2017 at 10:35, ALEKSEY KUZNETSOV <
>> > alkuznetsov.sb@gmail.com
>> > > >:
>> > > >
>> > > > > Hi! No, I don't have a ticket for this.
>> > > > > In the ticket I have implemented methods that change the transaction
>> > > > > status to STOP, thus letting it commit the transaction in another
>> > > > > thread. In the other thread you are going to restart the transaction
>> > > > > in order to commit it.
>> > > > > The mechanism behind it is obvious: we change the thread id to the
>> > > > > newer one in ThreadMap, and make use of serialization of txState and
>> > > > > the transactions themselves to transfer them into another thread.
>> > > > >
>> > > > >
>> > > > > Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:
>> > > > >
>> > > > > Aleksey,
>> > > > >
>> > > > > Do you have a ticket for this? Could you briefly list what exactly
>> > > > > was done and how things work.
>> > > > >
>> > > > > —
>> > > > > Denis
>> > > > >
>> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
>> > > > alkuznetsov.sb@gmail.com>
>> > > > > wrote:
>> > > > > >
>> > > > > > Hi, Igniters! I've made an implementation of transactions with a
>> > > > > > non-single coordinator. Here you can start a transaction in one
>> > > > > > thread and commit it in another thread.
>> > > > > > Take a look at it. Give your thoughts on it.
>> > > > > >
>> > > > > >
>> > > > > https://github.com/voipp/ignite/pull/10/commits/
>> > > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
>> > > > > >
>> > > > > > Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <
>> > > sergi.vladykin@gmail.com
>> > > > >:
>> > > > > >
>> > > > > >> You know better, go ahead! :)
>> > > > > >>
>> > > > > >> Sergi
>> > > > > >>
>> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > alkuznetsov.sb@gmail.com
>> > > > > >:
>> > > > > >>
>> > > > > >>> we've discovered several problems regarding your "accumulation"
>> > > > > >>> approach. These are:
>> > > > > >>>
>> > > > > >>>   1. performance issues when transferring data from the temporary
>> > > > > >>>   cache to the permanent one. Keep in mind the big deal of
>> > > > > >>>   concurrent transactions in the Service committer
>> > > > > >>>   2. extreme memory load when keeping the temporary cache in memory
>> > > > > >>>   3. as long as the user is not acquainted with Ignite, working
>> > > > > >>>   with the cache must be transparent for him. Keep this in mind.
>> > > > > >>>   The user's node can evaluate logic with no transaction at all,
>> > > > > >>>   so we should deal with both types of execution flow:
>> > > > > >>>   transactional and non-transactional. Another problem is
>> > > > > >>>   transaction id support at the user node. We would have to handle
>> > > > > >>>   all these issues and many more.
>> > > > > >>>   4. we cannot pessimistically lock an entity.
>> > > > > >>>
>> > > > > >>> As a result, we decided to move on to building a distributed
>> > > > > >>> transaction. We put aside your "accumulation" approach until we
>> > > > > >>> realize how to solve the difficulties above.
>> > > > > >>>
>> > > > > >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <
>> > > > sergi.vladykin@gmail.com
>> > > > > >:
>> > > > > >>>
>> > > > > >>>> The problem "How to run millions of entities, and millions of
>> > > > > >>>> operations, on a single Pentium3" is out of scope here. Do the
>> > > > > >>>> math, plan capacity reasonably.
>> > > > > >>>>
>> > > > > >>>> Sergi
>> > > > > >>>>
>> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > alkuznetsov.sb@gmail.com
>> > > > > >>> :
>> > > > > >>>>
>> > > > > >>>>> hmm, if we have millions of entities, and millions of
>> > > > > >>>>> operations, would not this approach lead to memory overflow and
>> > > > > >>>>> performance degradation?
>> > > > > >>>>>
>> > > > > >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <
>> > > > > >> sergi.vladykin@gmail.com
>> > > > > >>>> :
>> > > > > >>>>>
>> > > > > >>>>>> 1. Actually you have to check versions on all the values
>> you
>> > > have
>> > > > > >>> read
>> > > > > >>>>>> during the tx.
>> > > > > >>>>>>
>> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
>> > > > > >>>>>>
>> > > > > >>>>>> put(k1, get(k2) + 5)
>> > > > > >>>>>>
>> > > > > >>>>>> We have to remember the version for k2. This logic can be
>> > > > > >> relatively
>> > > > > >>>>> easily
>> > > > > >>>>>> encapsulated in a framework atop of Ignite. You need to
>> > > implement
>> > > > > >> one
>> > > > > >>>> to
>> > > > > >>>>>> make all this stuff usable.
>> > > > > >>>>>>
>> > > > > >>>>>> 2. I suggest to avoid any locking here, because you easily
>> > will
>> > > > end
>> > > > > >>> up
>> > > > > >>>>> with
>> > > > > >>>>>> deadlocks. If you do not have too frequent updates for your
>> > > keys,
>> > > > > >>>>>> optimistic approach will work just fine.
>> > > > > >>>>>>
>> > > > > >>>>>> Theoretically in the Committer Service you can start a
>> thread
>> > > for
>> > > > > >> the
>> > > > > >>>>>> lifetime of the whole distributed transaction, take a lock
>> on
>> > > the
>> > > > > >> key
>> > > > > >>>>> using
>> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services, wait
>> > for
>> > > > all
>> > > > > >>> the
>> > > > > >>>>>> services to complete, execute optimistic commit in the same
>> > > thread
>> > > > > >>>> while
>> > > > > >>>>>> keeping this lock and then release it. Notice that all the
>> > > Ignite
>> > > > > >>>>>> transactions inside of all Services must be optimistic
>> here to
>> > > be
>> > > > > >>> able
>> > > > > >>>> to
>> > > > > >>>>>> read this locked key.
>> > > > > >>>>>>
>> > > > > >>>>>> But again I do not recommend you using this approach until
>> you
>> > > > > >> have a
>> > > > > >>>>>> reliable deadlock avoidance scheme.
>> > > > > >>>>>>
>> > > > > >>>>>> Sergi
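[A sketch of the read-set bookkeeping from point 1, using an application-level
version field of the kind proposed later in this thread. The Versioned wrapper
is an assumed type, not part of Ignite:]

    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.IgniteCache;

    // Application-level versioned value: ver is regenerated on every update.
    class Versioned implements Serializable {
        final int val;
        final UUID ver;
        Versioned(int val, UUID ver) { this.val = val; this.ver = ver; }
    }

    class ReadSetExample {
        // put(k1, get(k2) + 5): remember the version of everything we read, so
        // the committer can later detect that k2 changed under us.
        static Map<String, UUID> putDerived(IgniteCache<String, Versioned> cache) {
            Map<String, UUID> readSet = new HashMap<>();

            Versioned v2 = cache.get("k2");
            readSet.put("k2", v2.ver);          // k2 is part of the read set now

            cache.put("k1", new Versioned(v2.val + 5, UUID.randomUUID()));

            return readSet;                     // handed to the committer for validation
        }
    }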
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>>
>> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > >>> alkuznetsov.sb@gmail.com
>> > > > > >>>>> :
>> > > > > >>>>>>
>> > > > > >>>>>>> Yeah, now I got it.
>> > > > > >>>>>>> There are some doubts on this approach:
>> > > > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
>> > > > > >>>>>>> altered the original values, you must check versions of other
>> > > > > >>>>>>> dependent keys. How could we obtain those keys (in an
>> > > > > >>>>>>> automated manner, of course)?
>> > > > > >>>>>>> 2) How could we lock a key before some Service A introduces
>> > > > > >>>>>>> changes, so that no other service is allowed to change this
>> > > > > >>>>>>> key-value (a sort of pessimistic blocking)?
>> > > > > >>>>>>> Maybe you know some implementations of such an approach?
>> > > > > >>>>>>>
>> > > > > >>>>>>>
>> > > > > >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <
>> > > > > >>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>> :
>> > > > > >>>>>>>
>> > > > > >>>>>>>> Thank you very much for help.  I will answer later.
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <
>> > > > > >>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>> :
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> All the services do not update key in place, but only
>> > generate
>> > > > > >>> new
>> > > > > >>>>> keys
>> > > > > >>>>>>>> augmented by otx and store the updated value in the same
>> > cache
>> > > > > >> +
>> > > > > >>>>>> remember
>> > > > > >>>>>>>> the keys and versions participating in the transaction in
>> > some
>> > > > > >>>>> separate
>> > > > > >>>>>>>> atomic cache.
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Follow this sequence of changes applied to cache
>> contents by
>> > > > > >> each
>> > > > > >>>>>>> Service:
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Initial cache contents:
>> > > > > >>>>>>>>            [k1 => v1]
>> > > > > >>>>>>>>            [k2 => v2]
>> > > > > >>>>>>>>            [k3 => v3]
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Cache contents after Service A:
>> > > > > >>>>>>>>            [k1 => v1]
>> > > > > >>>>>>>>            [k2 => v2]
>> > > > > >>>>>>>>            [k3 => v3]
>> > > > > >>>>>>>>            [k1x => v1a]
>> > > > > >>>>>>>>            [k2x => v2a]
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some
>> separate
>> > > > > >>> atomic
>> > > > > >>>>>> cache
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Cache contents after Service B:
>> > > > > >>>>>>>>            [k1 => v1]
>> > > > > >>>>>>>>            [k2 => v2]
>> > > > > >>>>>>>>            [k3 => v3]
>> > > > > >>>>>>>>            [k1x => v1a]
>> > > > > >>>>>>>>            [k2x => v2ab]
>> > > > > >>>>>>>>            [k3x => v3b]
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
>> some
>> > > > > >>>>> separate
>> > > > > >>>>>>>> atomic cache
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Finally the Committer Service takes this map of updated
>> keys
>> > > > > >> and
>> > > > > >>>>> their
>> > > > > >>>>>>>> versions from some separate atomic cache, starts Ignite
>> > > > > >>> transaction
>> > > > > >>>>> and
>> > > > > >>>>>>>> replaces all the values for k* keys to values taken from
>> k*x
>> > > > > >>> keys.
>> > > > > >>>>> The
>> > > > > >>>>>>>> successful result must be the following:
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>            [k1 => v1a]
>> > > > > >>>>>>>>            [k2 => v2ab]
>> > > > > >>>>>>>>            [k3 => v3b]
>> > > > > >>>>>>>>            [k1x => v1a]
>> > > > > >>>>>>>>            [k2x => v2ab]
>> > > > > >>>>>>>>            [k3x => v3b]
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
>> some
>> > > > > >>>>> separate
>> > > > > >>>>>>>> atomic cache
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> But Committer Service also has to check that no one
>> updated
>> > > the
>> > > > > >>>>>> original
>> > > > > >>>>>>>> values before us, because otherwise we can not give any
>> > > > > >>>>> serializability
>> > > > > >>>>>>>> guarantee for these distributed transactions. Here we may
>> > need
>> > > > > >> to
>> > > > > >>>>> check
>> > > > > >>>>>>> not
>> > > > > >>>>>>>> only versions of the updated keys, but also versions of
>> any
>> > > > > >> other
>> > > > > >>>>> keys
>> > > > > >>>>>>> end
>> > > > > >>>>>>>> result depends on.
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> After that Committer Service has to do a cleanup (may be
>> > > > > >> outside
>> > > > > >>> of
>> > > > > >>>>> the
>> > > > > >>>>>>>> committing tx) to come to the following final state:
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>            [k1 => v1a]
>> > > > > >>>>>>>>            [k2 => v2ab]
>> > > > > >>>>>>>>            [k3 => v3b]
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Makes sense?
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> Sergi
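[A sketch of that final check-and-swap, assuming the application-level
Versioned wrapper (a value plus a UUID `ver`) and the per-transaction map of
read versions described above. All names are illustrative:]

    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

    class CommitterServiceSketch {
        // Promote the temporary k*x values to the real k* keys, failing if any
        // original value's version moved on since it was read.
        static boolean commit(Ignite ignite, IgniteCache<String, Versioned> cache,
                              Map<String, UUID> readVersions) {
            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                for (Map.Entry<String, UUID> e : readVersions.entrySet()) {
                    Versioned cur = cache.get(e.getKey());

                    if (cur == null || !cur.ver.equals(e.getValue()))
                        return false;   // someone updated before us: tx rolls back on close

                    cache.put(e.getKey(), cache.get(e.getKey() + "x"));  // take the temp value
                }
                tx.commit();
            }
            // cleanup of the temporary k*x entries can happen outside the committing tx
            return true;
        }
    }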
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>
>> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > >>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>> :
>> > > > > >>>>>>>>
>> > > > > >>>>>>>>>   - What do you mean by saying "*in a single transaction
>> > > > > >>>>>>>>>   checks value versions for all the old values and replaces
>> > > > > >>>>>>>>>   them with calculated new ones*"? Every time you change a
>> > > > > >>>>>>>>>   value (in some service), you store it to *some special
>> > > > > >>>>>>>>>   atomic cache*, so when all services have ceased working,
>> > > > > >>>>>>>>>   the Service committer gets the values with the last
>> > > > > >>>>>>>>>   versions.
>> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
>> > > > > >>>>>>>>>   Service committer persists them into the permanent store,
>> > > > > >>>>>>>>>   doesn't it?
>> > > > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
>> > > > > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
>> > > > > >>>>>>>>>   would it match?
>> > > > > >>>>>>>>>
>> > > > > >>>>>>>>>
>> > > > > >>>>>>>>> Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <
>> > > > > >>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>> :
>> > > > > >>>>>>>>>
>> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
>> > > > > >>>>> application
>> > > > > >>>>>>>> level.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Lets say we have to call 2 services in the following
>> > order:
>> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 =>
>> v2]
>> > > > > >> to
>> > > > > >>>>> [k1
>> > > > > >>>>>> =>
>> > > > > >>>>>>>>> v1a,
>> > > > > >>>>>>>>>>  k2 => v2a]
>> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
>> > > > > >> to
>> > > > > >>>> [k2
>> > > > > >>>>>> =>
>> > > > > >>>>>>>>> v2ab,
>> > > > > >>>>>>>>>> k3 => v3b]
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> The change
>> > > > > >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
>> > > > > >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
>> > > > > >>>>>>>>>> must happen in a single transaction.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Optimistic protocol to solve this:
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a
>> unique
>> > > > > >>>>>>> orchestrator
>> > > > > >>>>>>>> TX
>> > > > > >>>>>>>>>> identifier - it must be a parameter passed to all the
>> > > > > >>> services.
>> > > > > >>>>> If
>> > > > > >>>>>>>> `otx`
>> > > > > >>>>>>>>> is
>> > > > > >>>>>>>>>> set to some value it means that it is an intermediate
>> key
>> > > > > >> and
>> > > > > >>>> is
>> > > > > >>>>>>>> visible
>> > > > > >>>>>>>>>> only inside of some transaction, for the finalized key
>> > > > > >> `otx`
>> > > > > >>>> must
>> > > > > >>>>>> be
>> > > > > >>>>>>>>> null -
>> > > > > >>>>>>>>>> it means the key is committed and visible for everyone.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
>> > version
>> > > > > >>> of
>> > > > > >>>>> that
>> > > > > >>>>>>>>> value.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to
>> use
>> > > > > >>>> UUID.
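[A sketch of that key/value shape, with assumed names; these are plain
application classes, not Ignite types:]

    import java.io.Serializable;
    import java.util.Objects;
    import java.util.UUID;

    // Key: otx == null means committed and visible to everyone; a non-null otx
    // marks an intermediate key that only its orchestrator TX should look at.
    class OtxKey implements Serializable {
        final String key;
        final UUID otx;

        OtxKey(String key, UUID otx) { this.key = key; this.otx = otx; }

        // equals()/hashCode() over (key, otx) are required for cache keys.
        @Override public boolean equals(Object o) {
            return o instanceof OtxKey
                && Objects.equals(key, ((OtxKey)o).key)
                && Objects.equals(otx, ((OtxKey)o).otx);
        }

        @Override public int hashCode() { return Objects.hash(key, otx); }
    }

    // Value: ver is the version stamp used by the optimistic commit check.
    class OtxValue implements Serializable {
        final Object val;
        final UUID ver;

        OtxValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
    }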
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Workflow is the following:
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Orchestrator starts the distributed transaction with
>> `otx`
>> > > > > >> =
>> > > > > >>> x
>> > > > > >>>>> and
>> > > > > >>>>>>>> passes
>> > > > > >>>>>>>>>> this parameter to all the services.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Service A:
>> > > > > >>>>>>>>>> - does some computations
>> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
>> > > > > >>>>>>>>>>      where
>> > > > > >>>>>>>>>>          Za - left time from max Orchestrator TX
>> duration
>> > > > > >>>> after
>> > > > > >>>>>>>> Service
>> > > > > >>>>>>>>> A
>> > > > > >>>>>>>>>> end
>> > > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field
>> `otx` =
>> > > > > >> x
>> > > > > >>>>>>>>>>          v2a has updated version `ver`
>> > > > > >>>>>>>>>> - returns a set of updated keys and all the old
>> versions
>> > > > > >> to
>> > > > > >>>> the
>> > > > > >>>>>>>>>> orchestrator
>> > > > > >>>>>>>>>>       or just stores it in some special atomic cache
>> like
>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Service B:
>> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it
>> knows
>> > > > > >>>> `otx`
>> > > > > >>>>> =
>> > > > > >>>>>> x
>> > > > > >>>>>>>>>> - does computations
>> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
>> > > > > >>>>>>>>>> - updates the set of updated keys like [x => (k1 ->
>> ver1,
>> > > > > >> k2
>> > > > > >>>> ->
>> > > > > >>>>>>> ver2,
>> > > > > >>>>>>>> k3
>> > > > > >>>>>>>>>> -> ver3)] TTL = Zb
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
>> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
>> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
>> > > > > >>>>>>>>>> - in a single transaction checks value versions for all
>> > > > > >> the
>> > > > > >>>> old
>> > > > > >>>>>>> values
>> > > > > >>>>>>>>>>       and replaces them with calculated new ones
>> > > > > >>>>>>>>>> - does cleanup of temporary keys and values
>> > > > > >>>>>>>>>> - in case of version mismatch or TX timeout just
>> rollbacks
>> > > > > >>> and
>> > > > > >>>>>>> signals
>> > > > > >>>>>>>>>>        to Orchestrator to restart the job with new
>> `otx`
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> PROFIT!!
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> This approach even allows you to run independent parts
>> of
>> > > > > >> the
>> > > > > >>>>> graph
>> > > > > >>>>>>> in
>> > > > > >>>>>>>>>> parallel (with TX transfer you will always run only
>> one at
>> > > > > >> a
>> > > > > >>>>> time).
>> > > > > >>>>>>>> Also
>> > > > > >>>>>>>>> it
>> > > > > >>>>>>>>>> does not require inventing any special fault tolerance
>> > > > > >>> technics
>> > > > > >>>>>>> because
>> > > > > >>>>>>>>>> Ignite caches are already fault tolerant and all the
>> > > > > >>>> intermediate
>> > > > > >>>>>>>> results
>> > > > > >>>>>>>>>> are virtually invisible and stored with TTL, thus in
>> case
>> > > > > >> of
>> > > > > >>>> any
>> > > > > >>>>>>> crash
>> > > > > >>>>>>>>> you
>> > > > > >>>>>>>>>> will not have inconsistent state or garbage.
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > >>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>> :
>> > > > > >>>>>>>>>>
>> > > > > >>>>>>>>>>> Okay, we are open to proposals on the business task. I
>> > > > > >>>>>>>>>>> mean, we can make use of some other thing, not a
>> > > > > >>>>>>>>>>> distributed transaction. No transaction yet.
>> > > > > >>>>>>>>>>>
>> > > > > >>>>>>>>>>>
>> > > > > >>>>>>>>>>> Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <
>> > > > > >>>>>> vozerov@gridgain.com
>> > > > > >>>>>>>> :
>> > > > > >>>>>>>>>>>
>> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi
>> already
>> > > > > >>>>>>> mentioned,
>> > > > > >>>>>>>>> the
>> > > > > >>>>>>>>>>>> problem is far more complex, than simply passing TX
>> > > > > >> state
>> > > > > >>>>> over
>> > > > > >>>>>> a
>> > > > > >>>>>>>>> wire.
>> > > > > >>>>>>>>>>> Most
>> > > > > >>>>>>>>>>>> probably a kind of coordinator will be required still
>> > > > > >> to
>> > > > > >>>>> manage
>> > > > > >>>>>>> all
>> > > > > >>>>>>>>>> kinds
>> > > > > >>>>>>>>>>>> of failures. This task should be started with clean
>> > > > > >>> design
>> > > > > >>>>>>> proposal
>> > > > > >>>>>>>>>>>> explaining how we handle all these concurrent events.
>> > > > > >> And
>> > > > > >>>>> only
>> > > > > >>>>>>>> then,
>> > > > > >>>>>>>>>> when
>> > > > > >>>>>>>>>>>> we understand all implications, we should move to
>> > > > > >>>> development
>> > > > > >>>>>>>> stage.
>> > > > > >>>>>>>>>>>>
>> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
>> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
>> > > > > >>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>> Right
>> > > > > >>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
>> > > > > >>>>>> predefined
>> > > > > >>>>>>>>> graph
>> > > > > >>>>>>>>>> of
>> > > > > >>>>>>>>>>>>>> distributed services to be invoked, calls them by
>> > > > > >>> some
>> > > > > >>>>> kind
>> > > > > >>>>>>> of
>> > > > > >>>>>>>>> RPC
>> > > > > >>>>>>>>>>> and
>> > > > > >>>>>>>>>>>>>> passes the needed parameters between them, right?
>> > > > > >>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible
>> > > > > >>> for
>> > > > > >>>>>>>> managing
>> > > > > >>>>>>>>>>>> business
>> > > > > >>>>>>>>>>>>>>> scenario flows. Many nodes are involved in
>> > > > > >>>> scenarios.
>> > > > > >>>>>> They
>> > > > > >>>>>>>>>>> exchange
>> > > > > >>>>>>>>>>>>> data
>> > > > > >>>>>>>>>>>>>>> and follow one another. If you are acquainted with the BPMN
>> > > > > >>>>>>> framework,
>> > > > > >>>>>>>> so
>> > > > > >>>>>>>>>>>>>>> the orchestrator is like a BPMN engine.
>> > > > > >>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>> вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
>> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
>> > > > > >> from
>> > > > > >>>>>>> Microsoft
>> > > > > >>>>>>>> or
>> > > > > >>>>>>>>>>> your
>> > > > > >>>>>>>>>>>>>> custom
>> > > > > >>>>>>>>>>>>>>>> in-house software?
>> > > > > >>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
>> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers
>> > > > > >>> which
>> > > > > >>>>>>> fulfill
>> > > > > >>>>>>>>>>> custom
>> > > > > >>>>>>>>>>>>>> logic.
>> > > > > >>>>>>>>>>>>>>>>> These servers compose an oriented graph (a BPMN
>> > > > > >>>> process)
>> > > > > >>>>>>> which
>> > > > > >>>>>>>>>>>>> is controlled
>> > > > > >>>>>>>>>>>>>> by
>> > > > > >>>>>>>>>>>>>>>>> Orchestrator.
>> > > > > >>>>>>>>>>>>>>>>> For instance, *server1  *creates *variable A
>> > > > > >>>> *with
>> > > > > >>>>>>> value
>> > > > > >>>>>>>> 1,
>> > > > > >>>>>>>>>>>>> persists
>> > > > > >>>>>>>>>>>>>> it
>> > > > > >>>>>>>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>>>>>> IGNITE cache and creates *variable B *and
>> > > > > >> sends
>> > > > > >>>> it
>> > > > > >>>>>> to*
>> > > > > >>>>>>>>>> server2.
>> > > > > >>>>>>>>>>>>> *The
>> > > > > >>>>>>>>>>>>>>>>> latter receives *variable B*, does some logic
>> > > > > >>> with
>> > > > > >>>>> it
>> > > > > >>>>>>> and
>> > > > > >>>>>>>>>> stores
>> > > > > >>>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>>>>> IGNITE.
>> > > > > >>>>>>>>>>>>>>>>> All the work done by both servers must be
>> > > > > >>>> fulfilled
>> > > > > >>>>>> in
>> > > > > >>>>>>>>> *one*
>> > > > > >>>>>>>>>>>>>>> transaction.
>> > > > > >>>>>>>>>>>>>>>>> Because we need all information done, or
>> > > > > >>>>>>>>> nothing(rollbacked).
>> > > > > >>>>>>>>>>> The
>> > > > > >>>>>>>>>>>>>>>> scenario
>> > > > > >>>>>>>>>>>>>>>>> is managed by orchestrator.
>> > > > > >>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
>> > > > > >>>>>>>>>>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your
>> > > > > >>> wrong
>> > > > > >>>>>>>> solution
>> > > > > >>>>>>>>>> for
>> > > > > >>>>>>>>>>>> it.
>> > > > > >>>>>>>>>>>>>>>>>> Lets try again, what is the business case?
>> > > > > >>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
>> > > > > >> KUZNETSOV
>> > > > > >>> <
>> > > > > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts
>> > > > > >>>>> transaction
>> > > > > >>>>>>> in
>> > > > > >>>>>>>>> one
>> > > > > >>>>>>>>>>>> node,
>> > > > > >>>>>>>>>>>>>> and
>> > > > > >>>>>>>>>>>>>>>>> commits
>> > > > > >>>>>>>>>>>>>>>>>>> this transaction in another JVM node (or
>> > > > > >>>>> rollback
>> > > > > >>>>>> it
>> > > > > >>>>>>>>>>>> remotely).
>> > > > > >>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 16:30, Sergi
>> > > > > >>> Vladykin <
>> > > > > >>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for
>> > > > > >> some
>> > > > > >>>>>>>> simplistic
>> > > > > >>>>>>>>>>>>> scenario,
>> > > > > >>>>>>>>>>>>>>> get
>> > > > > >>>>>>>>>>>>>>>>>> ready
>> > > > > >>>>>>>>>>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>>>>>>>>> write many fault tolerance tests and
>> > > > > >> make
>> > > > > >>>>> sure
>> > > > > >>>>>>> that
>> > > > > >>>>>>>>> you
>> > > > > >>>>>>>>>>> TXs
>> > > > > >>>>>>>>>>>>>> work
>> > > > > >>>>>>>>>>>>>>>>>>> gracefully
>> > > > > >>>>>>>>>>>>>>>>>>>> in all modes in case of crashes. Also
>> > > > > >>> make
>> > > > > >>>>> sure
>> > > > > >>>>>>>> that
>> > > > > >>>>>>>>> we
>> > > > > >>>>>>>>>>> do
>> > > > > >>>>>>>>>>>>> not
>> > > > > >>>>>>>>>>>>>>> have
>> > > > > >>>>>>>>>>>>>>>>> any
>> > > > > >>>>>>>>>>>>>>>>>>>> performance drops after all your
>> > > > > >> changes
>> > > > > >>> in
>> > > > > >>>>>>>> existing
>> > > > > >>>>>>>>>>>>>> benchmarks.
>> > > > > >>>>>>>>>>>>>>>> All
>> > > > > >>>>>>>>>>>>>>>>> in
>> > > > > >>>>>>>>>>>>>>>>>>> all
>> > > > > >>>>>>>>>>>>>>>>>>>> I don't believe these conditions will
>> > > > > >> be
>> > > > > >>>> met
>> > > > > >>>>>> and
>> > > > > >>>>>>>> your
>> > > > > >>>>>>>>>>>>>>> contribution
>> > > > > >>>>>>>>>>>>>>>>> will
>> > > > > >>>>>>>>>>>>>>>>>>> be
>> > > > > >>>>>>>>>>>>>>>>>>>> accepted.
>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>> Better solution to what problem?
>> > > > > >> Sending
>> > > > > >>> TX
>> > > > > >>>>> to
>> > > > > >>>>>>>>> another
>> > > > > >>>>>>>>>>>> node?
>> > > > > >>>>>>>>>>>>>> The
>> > > > > >>>>>>>>>>>>>>>>>> problem
>> > > > > >>>>>>>>>>>>>>>>>>>> statement itself is already wrong. What
>> > > > > >>>>>> business
>> > > > > >>>>>>>> case
>> > > > > >>>>>>>>>> you
>> > > > > >>>>>>>>>>>> are
>> > > > > >>>>>>>>>>>>>>>> trying
>> > > > > >>>>>>>>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>>>>>>>>> solve? I'm sure everything you need can
>> > > > > >>> be
>> > > > > >>>>> done
>> > > > > >>>>>>> in
>> > > > > >>>>>>>> a
>> > > > > >>>>>>>>>> much
>> > > > > >>>>>>>>>>>>> more
>> > > > > >>>>>>>>>>>>>>>> simple
>> > > > > >>>>>>>>>>>>>>>>>> and
>> > > > > >>>>>>>>>>>>>>>>>>>> efficient way at the application level.
>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY
>> > > > > >>>> KUZNETSOV
>> > > > > >>>>> <
>> > > > > >>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>> Why wrong ? You know the better
>> > > > > >>> solution?
>> > > > > >>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 15:46, Sergi
>> > > > > >>>>> Vladykin <
>> > > > > >>>>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>> Just serializing TX object and
>> > > > > >>>>>> deserializing
>> > > > > >>>>>>> it
>> > > > > >>>>>>>>> on
>> > > > > >>>>>>>>>>>>> another
>> > > > > >>>>>>>>>>>>>>> node
>> > > > > >>>>>>>>>>>>>>>>> is
>> > > > > >>>>>>>>>>>>>>>>>>>>>> meaningless, because other nodes
>> > > > > >>>>>>> participating
>> > > > > >>>>>>>> in
>> > > > > >>>>>>>>>> the
>> > > > > >>>>>>>>>>>> TX
>> > > > > >>>>>>>>>>>>>> have
>> > > > > >>>>>>>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>>>>>>> know
>> > > > > >>>>>>>>>>>>>>>>>>>>> about
>> > > > > >>>>>>>>>>>>>>>>>>>>>> the new coordinator. This will
>> > > > > >>> require
>> > > > > >>>>>>> protocol
>> > > > > >>>>>>>>>>>> changes,
>> > > > > >>>>>>>>>>>>> we
>> > > > > >>>>>>>>>>>>>>>>>>> definitely
>> > > > > >>>>>>>>>>>>>>>>>>>>> will
>> > > > > >>>>>>>>>>>>>>>>>>>>>> have fault tolerance and
>> > > > > >> performance
>> > > > > >>>>>> issues.
>> > > > > >>>>>>>> IMO
>> > > > > >>>>>>>>>> the
>> > > > > >>>>>>>>>>>>> whole
>> > > > > >>>>>>>>>>>>>>> idea
>> > > > > >>>>>>>>>>>>>>>>> is
>> > > > > >>>>>>>>>>>>>>>>>>>> wrong
>> > > > > >>>>>>>>>>>>>>>>>>>>>> and it makes no sense to waste time
>> > > > > >>> on
>> > > > > >>>>> it.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>> Sergi
>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
>> > > > > >>>>>> KUZNETSOV
>> > > > > >>>>>>> <
>> > > > > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> IgniteTransactionState
>> > > > > >>>> implementation
>> > > > > >>>>>>>> contains
>> > > > > >>>>>>>>>>>>>>>> IgniteTxEntry's
>> > > > > >>>>>>>>>>>>>>>>>>> which
>> > > > > >>>>>>>>>>>>>>>>>>>>> are
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> supposed to be transferable
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 19:32,
>> > > > > >>> Dmitriy
>> > > > > >>>>>>>> Setrakyan
>> > > > > >>>>>>>>> <
>> > > > > >>>>>>>>>>>>>>>>>>>> dsetrakyan@apache.org
>> > > > > >>>>>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me
>> > > > > >>> that
>> > > > > >>>>> we
>> > > > > >>>>>>> are
>> > > > > >>>>>>>>>>> passing
>> > > > > >>>>>>>>>>>>>>>>> transaction
>> > > > > >>>>>>>>>>>>>>>>>>>>> objects
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> around. Such object may contain
>> > > > > >>> all
>> > > > > >>>>>> sorts
>> > > > > >>>>>>>> of
>> > > > > >>>>>>>>>>> Ignite
>> > > > > >>>>>>>>>>>>>>>> context.
>> > > > > >>>>>>>>>>>>>>>>> If
>> > > > > >>>>>>>>>>>>>>>>>>>> some
>> > > > > >>>>>>>>>>>>>>>>>>>>>> data
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> needs to be passed across, we
>> > > > > >>>> should
>> > > > > >>>>>>>> create a
>> > > > > >>>>>>>>>>>> special
>> > > > > >>>>>>>>>>>>>>>>> transfer
>> > > > > >>>>>>>>>>>>>>>>>>>> object
>> > > > > >>>>>>>>>>>>>>>>>>>>>> in
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> this case.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> D.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10
>> > > > > >> AM,
>> > > > > >>>>>> ALEKSEY
>> > > > > >>>>>>>>>>> KUZNETSOV
>> > > > > >>>>>>>>>>>> <
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> well, there are a couple of
>> > > > > >> issues
>> > > > > >>>>>>> preventing
>> > > > > >>>>>>>>>>>>> transaction
>> > > > > >>>>>>>>>>>>>>>>>>> proceeding.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> At first, after transaction
>> > > > > >>>>>>> serialization
>> > > > > >>>>>>>>> and
>> > > > > >>>>>>>>>>>>>>>>> deserialization
>> > > > > >>>>>>>>>>>>>>>>>>> on
>> > > > > >>>>>>>>>>>>>>>>>>>>> the
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> remote
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> server, there is no txState.
>> > > > > >> So
>> > > > > >>>> I'm
>> > > > > >>>>>>> going
>> > > > > >>>>>>>> to
>> > > > > >>>>>>>>>> put
>> > > > > >>>>>>>>>>>> it
>> > > > > >>>>>>>>>>>>> in
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >> writeExternal()\readExternal()
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> The last one is that the deserialized
>> > > > > >>>>>>> transaction
>> > > > > >>>>>>>>>> lacks
>> > > > > >>>>>>>>>>> the
>> > > > > >>>>>>>>>>>>>>> shared
>> > > > > >>>>>>>>>>>>>>>>>> cache
>> > > > > >>>>>>>>>>>>>>>>>>>>>> context
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> field at
>> > > > > >> TransactionProxyImpl.
>> > > > > >>>>>> Perhaps,
>> > > > > >>>>>>>> it
>> > > > > >>>>>>>>>> must
>> > > > > >>>>>>>>>>>> be
>> > > > > >>>>>>>>>>>>>>>> injected
>> > > > > >>>>>>>>>>>>>>>>>> by
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> GridResourceProcessor ?
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 17:27,
>> > > > > >>>>> ALEKSEY
>> > > > > >>>>>>>>>> KUZNETSOV
>> > > > > >>>>>>>>>>> <
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> while starting and
>> > > > > >> continuing
>> > > > > >>>>>>>> transaction
>> > > > > >>>>>>>>>> in
>> > > > > >>>>>>>>>>>>>>> different
>> > > > > >>>>>>>>>>>>>>>>> jvms
>> > > > > >>>>>>>>>>>>>>>>>>> I
>> > > > > >>>>>>>>>>>>>>>>>>>>> run
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> into
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> serialization exception in
>> > > > > >>>>>>>>>> writeExternalMeta
>> > > > > >>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void writeExternal(ObjectOutput out) throws IOException {
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> some meta cannot be
>> > > > > >>>>> serialized.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в
>> > > > > >> 17:25,
>> > > > > >>>>> Alexey
>> > > > > >>>>>>>>>>> Goncharuk <
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> alexey.goncharuk@gmail.com
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> :
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to
>> > > > > >> get
>> > > > > >>>> what
>> > > > > >>>>>> you
>> > > > > >>>>>>>>> want,
>> > > > > >>>>>>>>>>>> but I
>> > > > > >>>>>>>>>>>>>>> have
>> > > > > >>>>>>>>>>>>>>>> a
>> > > > > >>>>>>>>>>>>>>>>>> few
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> concerns:
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the
>> > > > > >>>>> proposed
>> > > > > >>>>>>>>> change?
>> > > > > >>>>>>>>>>> In
>> > > > > >>>>>>>>>>>>> your
>> > > > > >>>>>>>>>>>>>>>> test,
>> > > > > >>>>>>>>>>>>>>>>>> you
>> > > > > >>>>>>>>>>>>>>>>>>>>> pass
>> > > > > >>>>>>>>>>>>>>>>>>>>>> an
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> instance of transaction
>> > > > > >>> created
>> > > > > >>>>> on
>> > > > > >>>>>>>>>> ignite(0)
>> > > > > >>>>>>>>>>> to
>> > > > > >>>>>>>>>>>>> the
>> > > > > >>>>>>>>>>>>>>>>> ignite
>> > > > > >>>>>>>>>>>>>>>>>>>>> instance
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> ignite(1). This is
>> > > > > >> obviously
>> > > > > >>>> not
>> > > > > >>>>>>>> possible
>> > > > > >>>>>>>>>> in
>> > > > > >>>>>>>>>>> a
>> > > > > >>>>>>>>>>>>>> truly
>> > > > > >>>>>>>>>>>>>>>>>>>> distributed
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> (multi-jvm) environment.
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize
>> > > > > >>>> cache
>> > > > > >>>>>>> update
>> > > > > >>>>>>>>>>> actions
>> > > > > >>>>>>>>>>>>> and
>> > > > > >>>>>>>>>>>>>>>>>>> transaction
>> > > > > >>>>>>>>>>>>>>>>>>>>>>> commit?
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Say, you have one node that
>> > > > > >>>>> decided
>> > > > > >>>>>>> to
>> > > > > >>>>>>>>>>> commit,
>> > > > > >>>>>>>>>>>>> but
>> > > > > >>>>>>>>>>>>>>>>> another
>> > > > > >>>>>>>>>>>>>>>>>>> node
>> > > > > >>>>>>>>>>>>>>>>>>>>> is
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>> still
>> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> writing within this
>> > > > > >>>> transaction.
>> > > > > >>>>>> How
>> > > > > >>>>>>> do
>> > > > > >>>>>>>>> you
>> > > > > >>>>>>>>>>>> make
>> > > > > >>>>>>>>>>>>>> sure
>> > > > > >>>>>>>>>>>>>>>>> that
>> > > > > >>>>>>>>>>>>>>>
>>
>> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
So what do you think of my idea?

Fri, Mar 31, 2017, 11:05 ALEKSEY KUZNETSOV <al...@gmail.com>:

> sorry for misleading you. We planned to support multi-node transactions,
> but failed.
>
> пт, 31 мар. 2017 г. в 10:51, Alexey Goncharuk <alexey.goncharuk@gmail.com
> >:
>
> Well, now the scenario is more clear, but it has nothing to do with
> multiple coordinators :) Let me think a little bit about it.
>
> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > so what do u think on the issue ?
> >
> > чт, 30 Мар 2017 г., 17:49 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > Hi! Thanks for the help. I've created a ticket:
> > > https://issues.apache.org/jira/browse/IGNITE-4887
> > > and a commit:
> > > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
> > 436b638e5c
> > > We really need this feature
> > >
> > > чт, 30 мар. 2017 г. в 11:31, Alexey Goncharuk <
> > alexey.goncharuk@gmail.com
> > > >:
> > >
> > > Aleksey,
> > >
> > > I doubt your approach works as expected. Current transaction recovery
> > > protocol heavily relies on the originating node ID in its internal
> logic.
> > > For example, currently a transaction will be rolled back if you want to
> > > transfer a transaction ownership to another node and original tx owner
> > > fails. An attempt to commit such a transaction on another node may fail
> > > with all sorts of assertions. After transaction ownership changed, you
> > need
> > > to notify all current transaction participants about this change, and
> it
> > > should also be done failover-safe, let alone that you did not add any
> > tests
> > > for these cases.
> > >
> > > I back Denis here. Please create a ticket first and come up with clear
> > > use-cases, API and protocol changes design. It is hard to reason about
> > the
> > > changes you've made when we do not even understand why you are making
> > these
> > > changes and how they are supposed to work.
> > >
> > > --AG
> > >
> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > So, what do u think on my idea ?
> > > >
> > > > ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > >
> > > > > Hi! No, I don't have a ticket for this.
> > > > > In the ticket I have implemented methods that change transaction
> > status
> > > > to
> > > > > STOP, thus letting it commit the transaction in another thread. In
> > > another
> > > > > thread you are going to restart the transaction in order to commit it.
> > > > > The mechanism behind it is obvious: we change the thread id to a newer
> one
> > > in
> > > > > ThreadMap, and make use of serialization of txState and the transaction
> > > itself
> > > > to
> > > > > transfer them into another thread.
> > > > >
> > > > >
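
For illustration, the usage pattern described above might look roughly like the sketch below. The stop() and restart() calls are hypothetical names mirroring the STOP-status mechanism from this message; they are the shape of the proposed API, not released Ignite methods:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    class CrossThreadTxSketch {
        static void run(Ignite ignite, IgniteCache<String, String> cache) throws Exception {
            // Thread 1: start the transaction and do part of the work.
            Transaction tx = ignite.transactions().txStart();
            cache.put("k1", "v1");
            tx.stop(); // hypothetical: moves the tx into the STOP state

            Thread t2 = new Thread(() -> {
                tx.restart(); // hypothetical: rebinds the tx to this thread in ThreadMap
                cache.put("k2", "v2");
                tx.commit(); // the commit happens in a different thread
            });

            t2.start();
            t2.join();
        }
    }
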
> > > > > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dm...@apache.org>:
> > > > >
> > > > > Aleksey,
> > > > >
> > > > > Do you have a ticket for this? Could you briefly list what exactly
> > was
> > > > > done and how the things work.
> > > > >
> > > > > —
> > > > > Denis
> > > > >
> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Hi, Igniters! I've made an implementation of transactions of
> > non-single
> > > > > > coordinator. Here you can start a transaction in one thread and
> > commit
> > > it
> > > > > in
> > > > > > another thread.
> > > > > > Take a look at it and give your thoughts on it.
> > > > > >
> > > > > >
> > > > > https://github.com/voipp/ignite/pull/10/commits/
> > > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > > > > >
> > > > > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <
> > > sergi.vladykin@gmail.com
> > > > >:
> > > > > >
> > > > > >> You know better, go ahead! :)
> > > > > >>
> > > > > >> Sergi
> > > > > >>
> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > >>
> > > > > >>> we've discovered several problems regarding your "accumulation"
> > > > > >>> approach. These are:
> > > > > >>>
> > > > > >>>   1. performance issues when transferring data from temporary
> cache
> > > to
> > > > > >>>   permanent one. Keep in mind the large number of concurrent
> transactions
> > in
> > > > > >>> Service
> > > > > >>>   committer
> > > > > >>>   2. extreme memory load when keeping temporary cache in memory
> > > > > >>>   3. As long as the user is not acquainted with Ignite, working
> with
> > > > cache
> > > > > >>>   must be transparent for him. Keep this in mind. User's node
> can
> > > > > >> evaluate
> > > > > >>>   logic with no transaction at all, so we should deal with both
> > > types
> > > > > of
> > > > > >>>   execution flow: transactional and non-transactional. Another
> > one
> > > > > >>> problem is
> > > > > >>>   transaction id support at the user node. We would have
> to handle
> > > all
> > > > > >> these
> > > > > >>>   issues and many more.
> > > > > >>>   4. we cannot pessimistically lock an entity.
> > > > > >>>
> > > > > >>> As a result, we decided to move on to building a distributed
> > > transaction.
> > > > We
> > > > > >> put
> > > > > >>> aside your "accumulation" approach until we realize how to
> solve
> > > > > >>> the difficulties above.
> > > > > >>>
> > > > > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> > > > sergi.vladykin@gmail.com
> > > > > >:
> > > > > >>>
> > > > > >>>> The problem "How to run millions of entities, and millions of
> > > > > >> operations
> > > > > >>> on
> > > > > >>>> a single Pentium3" is out of scope here. Do the math, plan
> > > capacity
> > > > > >>>> reasonably.
> > > > > >>>>
> > > > > >>>> Sergi
> > > > > >>>>
> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > >>> :
> > > > > >>>>
> > > > > >>>>> Hmm, if we have millions of entities, and millions of
> > operations,
> > > > > >> would
> > > > > >>>> not
> > > > > >>>>> this approach lead to memory overflow and performance
> > degradation
> > > > > >>>>>
> > > > > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > > > > >> sergi.vladykin@gmail.com
> > > > > >>>> :
> > > > > >>>>>
> > > > > >>>>>> 1. Actually you have to check versions on all the values you
> > > have
> > > > > >>> read
> > > > > >>>>>> during the tx.
> > > > > >>>>>>
> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > > > > >>>>>>
> > > > > >>>>>> put(k1, get(k2) + 5)
> > > > > >>>>>>
> > > > > >>>>>> We have to remember the version for k2. This logic can be
> > > > > >> relatively
> > > > > >>>>> easily
> > > > > >>>>>> encapsulated in a framework atop of Ignite. You need to
> > > implement
> > > > > >> one
> > > > > >>>> to
> > > > > >>>>>> make all this stuff usable.
> > > > > >>>>>>
> > > > > >>>>>> 2. I suggest avoiding any locking here, because you easily
> > will
> > > > end
> > > > > >>> up
> > > > > >>>>> with
> > > > > >>>>>> deadlocks. If you do not have too frequent updates for your
> > > keys,
> > > > > >>>>>> optimistic approach will work just fine.
> > > > > >>>>>>
> > > > > >>>>>> Theoretically in the Committer Service you can start a
> thread
> > > for
> > > > > >> the
> > > > > >>>>>> lifetime of the whole distributed transaction, take a lock
> on
> > > the
> > > > > >> key
> > > > > >>>>> using
> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services, wait
> > for
> > > > all
> > > > > >>> the
> > > > > >>>>>> services to complete, execute optimistic commit in the same
> > > thread
> > > > > >>>> while
> > > > > >>>>>> keeping this lock and then release it. Notice that all the
> > > Ignite
> > > > > >>>>>> transactions inside of all Services must be optimistic here
> to
> > > be
> > > > > >>> able
> > > > > >>>> to
> > > > > >>>>>> read this locked key.
> > > > > >>>>>>
> > > > > >>>>>> But again I do not recommend you using this approach until
> you
> > > > > >> have a
> > > > > >>>>>> reliable deadlock avoidance scheme.
> > > > > >>>>>>
> > > > > >>>>>> Sergi
> > > > > >>>>>>
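
As a sketch of point 1 above: every read inside the distributed transaction can go through a small helper that records the version of what was read, so the committer can verify it later. VersionedValue is a hypothetical wrapper invented for the sketch (a payload plus a `ver` stamp), not Ignite API:

    import java.io.Serializable;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.IgniteCache;

    // Hypothetical value wrapper: the payload plus a version stamp (`ver`).
    class VersionedValue implements Serializable {
        final Object payload;
        final UUID ver;

        VersionedValue(Object payload, UUID ver) {
            this.payload = payload;
            this.ver = ver;
        }
    }

    class TrackedReads {
        // Reads a value and remembers its version; for put(k1, get(k2) + 5)
        // this is what records k2's version for the later commit-time check.
        static Object trackedGet(IgniteCache<String, VersionedValue> cache,
            Map<String, UUID> readVersions, String key) {
            VersionedValue v = cache.get(key);

            if (v == null)
                return null;

            readVersions.put(key, v.ver); // kept for the final version check

            return v.payload;
        }
    }
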
> > > > > >>>>>>
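And a sketch of the lock-per-key variant from point 2, built on IgniteCache.lock(K). The cache name "data" is an assumption, and runAllServices stands in for invoking the distributed services and waiting for them to complete; as noted above, the services' own transactions must be optimistic so they can read the locked key:

    import java.util.concurrent.locks.Lock;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    class LockingCommitter {
        static void commitUnderLock(Ignite ignite, String contendedKey, Runnable runAllServices) {
            IgniteCache<String, Object> cache = ignite.cache("data");

            // Held by one thread for the lifetime of the whole distributed tx.
            Lock lock = cache.lock(contendedKey);

            lock.lock();

            try {
                runAllServices.run();

                // Optimistic commit in the same thread, still holding the lock.
                try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                    // Version checks and final writes would go here.
                    tx.commit();
                }
            }
            finally {
                lock.unlock();
            }
        }
    }

As the message warns, this variant only makes sense with a reliable deadlock avoidance scheme, for example always taking locks in a fixed key order.
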
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>> alkuznetsov.sb@gmail.com
> > > > > >>>>> :
> > > > > >>>>>>
> > > > > >>>>>>> Yeah, now I got it.
> > > > > >>>>>>> There are some doubts about this approach:
> > > > > >>>>>>> 1) During the optimistic commit phase, when you ensure no one
> > > altered
> > > > > >>> the
> > > > > >>>>>>> original values, you must check versions of other dependent
> > > keys.
> > > > > >>> How
> > > > > >>>>>> could
> > > > > >>>>>>> we obtain those keys (in an automated manner, of course)?
> > > > > >>>>>>> 2) How could we lock a key before some Service A introduces
> > > > > >> changes?
> > > > > >>>> So
> > > > > >>>>> no
> > > > > >>>>>>> other service is allowed to change this key-value?(sort of
> > > > > >>>> pessimistic
> > > > > >>>>>>> blocking)
> > > > > >>>>>>> May be you know some implementations of such approach ?
> > > > > >>>>>>>
> > > > > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > > > > >>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>
> > > > > >>>>>>>> Thank you very much for the help. I will answer later.
> > > > > >>>>>>>>
> > > > > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > > > > >>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>>
> > > > > >>>>>>>> All the services do not update keys in place, but only
> > generate
> > > > > >>> new
> > > > > >>>>> keys
> > > > > >>>>>>>> augmented by otx and store the updated value in the same
> > cache
> > > > > >> +
> > > > > >>>>>> remember
> > > > > >>>>>>>> the keys and versions participating in the transaction in
> > some
> > > > > >>>>> separate
> > > > > >>>>>>>> atomic cache.
> > > > > >>>>>>>>
> > > > > >>>>>>>> Follow this sequence of changes applied to cache contents
> by
> > > > > >> each
> > > > > >>>>>>> Service:
> > > > > >>>>>>>>
> > > > > >>>>>>>> Initial cache contents:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>
> > > > > >>>>>>>> Cache contents after Service A:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2a]
> > > > > >>>>>>>>
> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > > > >>> atomic
> > > > > >>>>>> cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> Cache contents after Service B:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2ab]
> > > > > >>>>>>>>            [k3x => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
> some
> > > > > >>>>> separate
> > > > > >>>>>>>> atomic cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> Finally the Committer Service takes this map of updated
> keys
> > > > > >> and
> > > > > >>>>> their
> > > > > >>>>>>>> versions from some separate atomic cache, starts Ignite
> > > > > >>> transaction
> > > > > >>>>> and
> > > > > >>>>>>>> replaces all the values for k* keys to values taken from
> k*x
> > > > > >>> keys.
> > > > > >>>>> The
> > > > > >>>>>>>> successful result must be the following:
> > > > > >>>>>>>>
> > > > > >>>>>>>>            [k1 => v1a]
> > > > > >>>>>>>>            [k2 => v2ab]
> > > > > >>>>>>>>            [k3 => v3b]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2ab]
> > > > > >>>>>>>>            [k3x => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
> some
> > > > > >>>>> separate
> > > > > >>>>>>>> atomic cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> But Committer Service also has to check that no one
> updated
> > > the
> > > > > >>>>>> original
> > > > > >>>>>>>> values before us, because otherwise we can not give any
> > > > > >>>>> serializability
> > > > > >>>>>>>> guarantee for these distributed transactions. Here we may
> > need
> > > > > >> to
> > > > > >>>>> check
> > > > > >>>>>>> not
> > > > > >>>>>>>> only versions of the updated keys, but also versions of
> any
> > > > > >> other
> > > > > >>>>> keys
> > > > > >>>>>>> end
> > > > > >>>>>>>> result depends on.
> > > > > >>>>>>>>
> > > > > >>>>>>>> After that Committer Service has to do a cleanup (may be
> > > > > >> outside
> > > > > >>> of
> > > > > >>>>> the
> > > > > >>>>>>>> committing tx) to come to the following final state:
> > > > > >>>>>>>>
> > > > > >>>>>>>>            [k1 => v1a]
> > > > > >>>>>>>>            [k2 => v2ab]
> > > > > >>>>>>>>            [k3 => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>> Makes sense?
> > > > > >>>>>>>>
> > > > > >>>>>>>> Sergi
> > > > > >>>>>>>>
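
A minimal sketch of that final committer step, reusing the hypothetical VersionedValue wrapper sketched earlier. The cache name "data", the versions map (business key mapped to the version the services read), and the key + ":" + otx temp-key naming are all assumptions for the sketch:

    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

    class Committer {
        // Returns false on a version mismatch so the orchestrator can restart
        // the whole job with a new otx.
        static boolean finalizeOtx(Ignite ignite, UUID otx, Map<String, UUID> versions) {
            IgniteCache<String, VersionedValue> cache = ignite.cache("data");

            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                // 1. Check that no one updated the original values before us.
                for (Map.Entry<String, UUID> e : versions.entrySet()) {
                    VersionedValue cur = cache.get(e.getKey());

                    if (cur == null || !cur.ver.equals(e.getValue()))
                        return false; // tx closes without commit -> rollback
                }

                // 2. Swap in the new values and drop the temporary keys
                //    (the cleanup may also be done outside the committing tx).
                for (String k : versions.keySet()) {
                    VersionedValue updated = cache.get(k + ":" + otx);

                    if (updated != null) {
                        cache.put(k, updated);
                        cache.remove(k + ":" + otx);
                    }
                }

                tx.commit();

                return true;
            }
        }
    }
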
> > > > > >>>>>>>>
> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>>
> > > > > >>>>>>>>>   - what do you mean by saying "
> > > > > >>>>>>>>> *in a single transaction checks value versions for all
> the
> > > > > >> old
> > > > > >>>>> values
> > > > > >>>>>>>>>    and replaces them with calculated new ones *"? Every
> > time
> > > > > >>> you
> > > > > >>>>>>> change
> > > > > >>>>>>>>>   a value (in some service), you store it to *some special
> > > > > >> atomic
> > > > > >>>>>> cache*
> > > > > >>>>>>> ,
> > > > > >>>>>>>> so
> > > > > >>>>>>>>>   when all services have ceased working, the Service committer gets
> the
> > > > > >>>> values
> > > > > >>>>>> with
> > > > > >>>>>>>> the
> > > > > >>>>>>>>>   last versions.
> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*"
> > > > > >>> Service
> > > > > >>>>>>> commiter
> > > > > >>>>>>>>>   persists them into permanent store, isn't it ?
> > > > > >>>>>>>>>   - I cant grasp your though, you say "*in case of
> version
> > > > > >>>>> mismatch
> > > > > >>>>>> or
> > > > > >>>>>>>> TX
> > > > > >>>>>>>>>   timeout just rollbacks*". But what versions would it
> > > > > >> match?
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
> > > > > >>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>> :
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > > > > >>>>> application
> > > > > >>>>>>>> level.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Lets say we have to call 2 services in the following
> > order:
> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 => v2]
> > > > > >> to
> > > > > >>>>> [k1
> > > > > >>>>>> =>
> > > > > >>>>>>>>> v1a,
> > > > > >>>>>>>>>>  k2 => v2a]
> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> > > > > >> to
> > > > > >>>> [k2
> > > > > >>>>>> =>
> > > > > >>>>>>>>> v2ab,
> > > > > >>>>>>>>>> k3 => v3b]
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> The change
> > > > > >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > > > > >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > >>>>>>>>>> must happen in a single transaction.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Optimistic protocol to solve this:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a
> unique
> > > > > >>>>>>> orchestrator
> > > > > >>>>>>>> TX
> > > > > >>>>>>>>>> identifier - it must be a parameter passed to all the
> > > > > >>> services.
> > > > > >>>>> If
> > > > > >>>>>>>> `otx`
> > > > > >>>>>>>>> is
> > > > > >>>>>>>>>> set to some value it means that it is an intermediate
> key
> > > > > >> and
> > > > > >>>> is
> > > > > >>>>>>>> visible
> > > > > >>>>>>>>>> only inside of some transaction, for the finalized key
> > > > > >> `otx`
> > > > > >>>> must
> > > > > >>>>>> be
> > > > > >>>>>>>>> null -
> > > > > >>>>>>>>>> it means the key is committed and visible for everyone.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
> > version
> > > > > >>> of
> > > > > >>>>> that
> > > > > >>>>>>>>> value.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to
> use
> > > > > >>>> UUID.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Workflow is the following:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Orchestrator starts the distributed transaction with
> `otx`
> > > > > >> =
> > > > > >>> x
> > > > > >>>>> and
> > > > > >>>>>>>> passes
> > > > > >>>>>>>>>> this parameter to all the services.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service A:
> > > > > >>>>>>>>>> - does some computations
> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > > > > >>>>>>>>>>      where
> > > > > >>>>>>>>>>          Za - left time from max Orchestrator TX
> duration
> > > > > >>>> after
> > > > > >>>>>>>> Service
> > > > > >>>>>>>>> A
> > > > > >>>>>>>>>> end
> > > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx`
> =
> > > > > >> x
> > > > > >>>>>>>>>>          v2a has updated version `ver`
> > > > > >>>>>>>>>> - returns a set of updated keys and all the old versions
> > > > > >> to
> > > > > >>>> the
> > > > > >>>>>>>>>> orchestrator
> > > > > >>>>>>>>>>       or just stores it in some special atomic cache
> like
> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service B:
> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it
> knows
> > > > > >>>> `otx`
> > > > > >>>>> =
> > > > > >>>>>> x
> > > > > >>>>>>>>>> - does computations
> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > >>>>>>>>>> - updates the set of updated keys like [x => (k1 ->
> ver1,
> > > > > >> k2
> > > > > >>>> ->
> > > > > >>>>>>> ver2,
> > > > > >>>>>>>> k3
> > > > > >>>>>>>>>> -> ver3)] TTL = Zb
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > >>>>>>>>>> - in a single transaction checks value versions for all
> > > > > >> the
> > > > > >>>> old
> > > > > >>>>>>> values
> > > > > >>>>>>>>>>       and replaces them with calculated new ones
> > > > > >>>>>>>>>> - does cleanup of temporary keys and values
> > > > > >>>>>>>>>> - in case of version mismatch or TX timeout just
> rollbacks
> > > > > >>> and
> > > > > >>>>>>> signals
> > > > > >>>>>>>>>>        to Orchestrator to restart the job with new `otx`
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> PROFIT!!
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> This approach even allows you to run independent parts
> of
> > > > > >> the
> > > > > >>>>> graph
> > > > > >>>>>>> in
> > > > > >>>>>>>>>> parallel (with TX transfer you will always run only one
> at
> > > > > >> a
> > > > > >>>>> time).
> > > > > >>>>>>>> Also
> > > > > >>>>>>>>> it
> > > > > >>>>>>>>>> does not require inventing any special fault tolerance
> > > > > >>> technics
> > > > > >>>>>>> because
> > > > > >>>>>>>>>> Ignite caches are already fault tolerant and all the
> > > > > >>>> intermediate
> > > > > >>>>>>>> results
> > > > > >>>>>>>>>> are virtually invisible and stored with TTL, thus in
> case
> > > > > >> of
> > > > > >>>> any
> > > > > >>>>>>> crash
> > > > > >>>>>>>>> you
> > > > > >>>>>>>>>> will not have inconsistent state or garbage.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Sergi
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>
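
As a sketch of the bookkeeping part, a service could merge its (key -> version) pairs into the per-otx entry of that separate atomic cache with an entry processor, so concurrent services do not lose each other's updates. The cache name "otx-meta" and the map layout are assumptions for the sketch:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CacheEntryProcessor;

    class OtxMeta {
        // Atomically merges this service's keys/versions into the entry
        // [otx => (k1 -> ver1, k2 -> ver2, ...)] of the "otx-meta" cache.
        // Wrap the cache with withExpiryPolicy(...) to get the TTL behaviour above.
        static void record(IgniteCache<UUID, HashMap<String, UUID>> meta,
            UUID otx, Map<String, UUID> delta) {
            meta.invoke(otx, (CacheEntryProcessor<UUID, HashMap<String, UUID>, Void>)(e, args) -> {
                HashMap<String, UUID> all = e.exists() ? e.getValue() : new HashMap<>();

                all.putAll((Map<String, UUID>)args[0]); // unchecked cast: sketch only

                e.setValue(all); // write back so the merge is stored

                return null;
            }, delta);
        }
    }
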
> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>> :
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>> Okay, we are open for proposals on business task. I
> mean,
> > > > > >>> we
> > > > > >>>>> can
> > > > > >>>>>>> make
> > > > > >>>>>>>>> use
> > > > > >>>>>>>>>>> of some other thing, not distributed transaction. Not
> > > > > >>>>> transaction
> > > > > >>>>>>>> yet.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <
> > > > > >>>>>> vozerov@gridgain.com
> > > > > >>>>>>>> :
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi
> already
> > > > > >>>>>>> mentioned,
> > > > > >>>>>>>>> the
> > > > > >>>>>>>>>>>> problem is far more complex, than simply passing TX
> > > > > >> state
> > > > > >>>>> over
> > > > > >>>>>> a
> > > > > >>>>>>>>> wire.
> > > > > >>>>>>>>>>> Most
> > > > > >>>>>>>>>>>> probably a kind of coordinator will be required still
> > > > > >> to
> > > > > >>>>> manage
> > > > > >>>>>>> all
> > > > > >>>>>>>>>> kinds
> > > > > >>>>>>>>>>>> of failures. This task should be started with clean
> > > > > >>> design
> > > > > >>>>>>> proposal
> > > > > >>>>>>>>>>>> explaining how we handle all these concurrent events.
> > > > > >> And
> > > > > >>>>> only
> > > > > >>>>>>>> then,
> > > > > >>>>>>>>>> when
> > > > > >>>>>>>>>>>> we understand all implications, we should move to
> > > > > >>>> development
> > > > > >>>>>>>> stage.
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Right
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
> > > > > >>>>>> predefined
> > > > > >>>>>>>>> graph
> > > > > >>>>>>>>>> of
> > > > > >>>>>>>>>>>>>> distributed services to be invoked, calls them by
> > > > > >>> some
> > > > > >>>>> kind
> > > > > >>>>>>> of
> > > > > >>>>>>>>> RPC
> > > > > >>>>>>>>>>> and
> > > > > >>>>>>>>>>>>>> passes the needed parameters between them, right?
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible
> > > > > >>> for
> > > > > >>>>>>>> managing
> > > > > >>>>>>>>>>>> business
> > > > > >>>>>>>>>>>>>>> scenario flows. Many nodes are involved in
> > > > > >>>> scenarios.
> > > > > >>>>>> They
> > > > > >>>>>>>>>>> exchange
> > > > > >>>>>>>>>>>>> data
> > > > > >>>>>>>>>>>>>>> and follow one another. If you are acquainted with the BPMN
> > > > > >>>>>>> framework,
> > > > > >>>>>>>> so
> > > > > >>>>>>>>>>>>>>> the orchestrator is like a BPMN engine.
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
> > > > > >> from
> > > > > >>>>>>> Microsoft
> > > > > >>>>>>>> or
> > > > > >>>>>>>>>>> your
> > > > > >>>>>>>>>>>>>> custom
> > > > > >>>>>>>>>>>>>>>> in-house software?
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers
> > > > > >>> which
> > > > > >>>>>>> fulfill
> > > > > >>>>>>>>>>> custom
> > > > > >>>>>>>>>>>>>> logic.
> > > > > >>>>>>>>>>>>>>>>> These servers compose an oriented graph (a BPMN
> > > > > >>>> process)
> > > > > >>>>>>> which
> > > > > >>>>>>>>>>>>> is controlled
> > > > > >>>>>>>>>>>>>> by
> > > > > >>>>>>>>>>>>>>>>> Orchestrator.
> > > > > >>>>>>>>>>>>>>>>> For instance, *server1  *creates *variable A
> > > > > >>>> *with
> > > > > >>>>>>> value
> > > > > >>>>>>>> 1,
> > > > > >>>>>>>>>>>>> persists
> > > > > >>>>>>>>>>>>>> it
> > > > > >>>>>>>>>>>>>>>> to
> > > > > >>>>>>>>>>>>>>>>> IGNITE cache and creates *variable B *and
> > > > > >> sends
> > > > > >>>> it
> > > > > >>>>>> to*
> > > > > >>>>>>>>>> server2.
> > > > > >>>>>>>>>>>>> *The
> > > > > >>>>>>>>>>>>>>>>> latter receives *variable B*, does some logic
> > > > > >>> with
> > > > > >>>>> it
> > > > > >>>>>>> and
> > > > > >>>>>>>>>> stores
> > > > > >>>>>>>>>>>> to
> > > > > >>>>>>>>>>>>>>>> IGNITE.
> > > > > >>>>>>>>>>>>>>>>> All the work done by both servers must be
> > > > > >>>> fulfilled
> > > > > >>>>>> in
> > > > > >>>>>>>>> *one*
> > > > > >>>>>>>>>>>>>>> transaction.
> > > > > >>>>>>>>>>>>>>>>> Because we need all information done, or
> > > > > >>>>>>>>> nothing(rollbacked).
> > > > > >>>>>>>>>>> The
> > > > > >>>>>>>>>>>>>>>> scenario
> > > > > >>>>>>>>>>>>>>>>> is managed by orchestrator.
> > > > > >>>>>>>>>>>>>>>>>
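
For illustration, the shape of one such orchestrated step could be as small as the sketch below; the TxStep contract, the variable-map convention, and the "data" cache name are all made up for the example:

    import java.util.Collections;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;

    // One BPMN-style step: it receives the shared orchestrator TX id plus the
    // variables produced by earlier steps, and returns variables for the next one.
    interface TxStep {
        Map<String, Object> execute(UUID otx, Map<String, Object> vars, Ignite ignite);
    }

    class Server1Step implements TxStep {
        @Override public Map<String, Object> execute(UUID otx, Map<String, Object> vars, Ignite ignite) {
            ignite.cache("data").put("A:" + otx, 1); // persist variable A under this otx

            return Collections.singletonMap("B", 2); // hand variable B to server2
        }
    }
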
> > > > > >>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > > > > >>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your
> > > > > >>> wrong
> > > > > >>>>>>>> solution
> > > > > >>>>>>>>>> for
> > > > > >>>>>>>>>>>> it.
> > > > > >>>>>>>>>>>>>>>>>> Lets try again, what is the business case?
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
> > > > > >> KUZNETSOV
> > > > > >>> <
> > > > > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts
> > > > > >>>>> transaction
> > > > > >>>>>>> in
> > > > > >>>>>>>>> one
> > > > > >>>>>>>>>>>> node,
> > > > > >>>>>>>>>>>>>> and
> > > > > >>>>>>>>>>>>>>>>> commits
> > > > > >>>>>>>>>>>>>>>>>>> this transaction in another JVM node (or
> > > > > >>>>> rollback
> > > > > >>>>>> it
> > > > > >>>>>>>>>>>> remotely).
> > > > > >>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 16:30, Sergi
> > > > > >>> Vladykin <
> > > > > >>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for
> > > > > >> some
> > > > > >>>>>>>> simplistic
> > > > > >>>>>>>>>>>>> scenario,
> > > > > >>>>>>>>>>>>>>> get
> > > > > >>>>>>>>>>>>>>>>>> ready
> > > > > >>>>>>>>>>>>>>>>>>> to
> > > > > >>>>>>>>>>>>>>>>>>>> write many fault tolerance tests and
> > > > > >> make
> > > > > >>>>> sure
> > > > > >>>>>>> that
> > > > > >>>>>>>>> you
> > > > > >>>>>>>>>>> TXs
> > > > > >>>>>>>>>>>>>> work
> > > > > >>>>>>>>>>>>>>>>>>> gracefully
> > > > > >>>>>>>>>>>>>>>>>>>> in all modes in case of crashes. Also
> > > > > >>> make
> > > > > >>>>> sure
> > > > > >>>>>>>> that
> > > > > >>>>>>>>> we
> > > > > >>>>>>>>>>> do
> > > > > >>>>>>>>>>>>> not
> > > > > >>>>>>>>>>>>>>> have
> > > > > >>>>>>>>>>>>>>>>> any
> > > > > >>>>>>>>>>>>>>>>>>>> performance drops after all your
> > > > > >> changes
> > > > > >>> in
> > > > > >>>>>>>> existing
> > > > > >>>>>>>>>>>>>> benchmarks.
> > > > > >>>>>>>>>>>>>>>> All
> > > > > >>>>>>>>>>>>>>>>> in
> > > > > >>>>>>>>>>>>>>>>>>> all
> > > > > >>>>>>>>>>>>>>>>>>>> I don't believe these conditions will
> > > > > >> be
> > > > > >>>> met
> > > > > >>>>>> and
> > > > > >>>>>>>> your
> > > > > >>>>>>>>>>>>>>> contribution
> > > > > >>>>>>>>>>>>>>>>> will
> > > > > >>>>>>>>>>>>>>>>>>> be
> > > > > >>>>>>>>>>>>>>>>>>>> accepted.
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Better solution to what problem?
> > > > > >> Sending
> > > > > >>> TX
> > > > > >>>>> to
> > > > > >>>>>>>>> another
> > > > > >>>>>>>>>>>> node?
> > > > > >>>>>>>>>>>>>> The
> > > > > >>>>>>>>>>>>>>>>>> problem
> > > > > >>>>>>>>>>>>>>>>>>>> statement itself is already wrong. What
> > > > > >>>>>> business
> > > > > >>>>>>>> case
> > > > > >>>>>>>>>> you
> > > > > >>>>>>>>>>>> are
> > > > > >>>>>>>>>>>>>>>> trying
> > > > > >>>>>>>>>>>>>>>>> to
> > > > > >>>>>>>>>>>>>>>>>>>> solve? I'm sure everything you need can
> > > > > >>> be
> > > > > >>>>> done
> > > > > >>>>>>> in
> > > > > >>>>>>>> a
> > > > > >>>>>>>>>> much
> > > > > >>>>>>>>>>>>> more
> > > > > >>>>>>>>>>>>>>>> simple
> > > > > >>>>>>>>>>>>>>>>>> and
> > > > > >>>>>>>>>>>>>>>>>>>> efficient way at the application level.
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY
> > > > > >>>> KUZNETSOV
> > > > > >>>>> <
> > > > > >>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>> Why wrong ? You know the better
> > > > > >>> solution?
> > > > > >>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 15:46, Sergi
> > > > > >>>>> Vladykin <
> > > > > >>>>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> Just serializing TX object and
> > > > > >>>>>> deserializing
> > > > > >>>>>>> it
> > > > > >>>>>>>>> on
> > > > > >>>>>>>>>>>>> another
> > > > > >>>>>>>>>>>>>>> node
> > > > > >>>>>>>>>>>>>>>>> is
> > > > > >>>>>>>>>>>>>>>>>>>>>> meaningless, because other nodes
> > > > > >>>>>>> participating
> > > > > >>>>>>>> in
> > > > > >>>>>>>>>> the
> > > > > >>>>>>>>>>>> TX
> > > > > >>>>>>>>>>>>>> have
> > > > > >>>>>>>>>>>>>>>> to
> > > > > >>>>>>>>>>>>>>>>>> know
> > > > > >>>>>>>>>>>>>>>>>>>>> about
> > > > > >>>>>>>>>>>>>>>>>>>>>> the new coordinator. This will
> > > > > >>> require
> > > > > >>>>>>> protocol
> > > > > >>>>>>>>>>>> changes,
> > > > > >>>>>>>>>>>>> we
> > > > > >>>>>>>>>>>>>>>>>>> definitely
> > > > > >>>>>>>>>>>>>>>>>>>>> will
> > > > > >>>>>>>>>>>>>>>>>>>>>> have fault tolerance and
> > > > > >> performance
> > > > > >>>>>> issues.
> > > > > >>>>>>>> IMO
> > > > > >>>>>>>>>> the
> > > > > >>>>>>>>>>>>> whole
> > > > > >>>>>>>>>>>>>>> idea
> > > > > >>>>>>>>>>>>>>>>> is
> > > > > >>>>>>>>>>>>>>>>>>>> wrong
> > > > > >>>>>>>>>>>>>>>>>>>>>> and it makes no sense to waste time
> > > > > >>> on
> > > > > >>>>> it.
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > > > > >>>>>> KUZNETSOV
> > > > > >>>>>>> <
> > > > > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>> IgniteTransactionState
> > > > > >>>> implementation
> > > > > >>>>>>>> contains
> > > > > >>>>>>>>>>>>>>>> IgniteTxEntry's
> > > > > >>>>>>>>>>>>>>>>>>> which
> > > > > >>>>>>>>>>>>>>>>>>>>> are
> > > > > >>>>>>>>>>>>>>>>>>>>>>> supposed to be transferable
> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 19:32,
> > > > > >>> Dmitriy
> > > > > >>>>>>>> Setrakyan
> > > > > >>>>>>>>> <
> > > > > >>>>>>>>>>>>>>>>>>>> dsetrakyan@apache.org
> > > > > >>>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me
> > > > > >>> that
> > > > > >>>>> we
> > > > > >>>>>>> are
> > > > > >>>>>>>>>>> passing
> > > > > >>>>>>>>>>>>>>>>> transaction
> > > > > >>>>>>>>>>>>>>>>>>>>> objects
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> around. Such object may contain
> > > > > >>> all
> > > > > >>>>>> sorts
> > > > > >>>>>>>> of
> > > > > >>>>>>>>>>> Ignite
> > > > > >>>>>>>>>>>>>>>> context.
> > > > > >>>>>>>>>>>>>>>>> If
> > > > > >>>>>>>>>>>>>>>>>>>> some
> > > > > >>>>>>>>>>>>>>>>>>>>>> data
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> needs to be passed across, we
> > > > > >>>> should
> > > > > >>>>>>>> create a
> > > > > >>>>>>>>>>>> special
> > > > > >>>>>>>>>>>>>>>>> transfer
> > > > > >>>>>>>>>>>>>>>>>>>> object
> > > > > >>>>>>>>>>>>>>>>>>>>>> in
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> this case.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> D.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10
> > > > > >> AM,
> > > > > >>>>>> ALEKSEY
> > > > > >>>>>>>>>>> KUZNETSOV
> > > > > >>>>>>>>>>>> <
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> well, there are a couple of
> > > > > >> issues
> > > > > >>>>>>> preventing
> > > > > >>>>>>>>>>>>> transaction
> > > > > >>>>>>>>>>>>>>>>>>> proceeding.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> At first, after transaction
> > > > > >>>>>>> serialization
> > > > > >>>>>>>>> and
> > > > > >>>>>>>>>>>>>>>>> deserialization
> > > > > >>>>>>>>>>>>>>>>>>> on
> > > > > >>>>>>>>>>>>>>>>>>>>> the
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> remote
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> server, there is no txState.
> > > > > >> So
> > > > > >> I'm
> > > > > >>>>>>> going
> > > > > >>>>>>>> to
> > > > > >>>>>>>>>> put
> > > > > >>>>>>>>>>>> it
> > > > > >>>>>>>>>>>>> in
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >> writeExternal()\readExternal()
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> The last one is that the deserialized
> > > > > >>>>>>> transaction
> > > > > >>>>>>>>>> lacks
> > > > > >>>>>>>>>>> the
> > > > > >>>>>>>>>>>>>>> shared
> > > > > >>>>>>>>>>>>>>>>>> cache
> > > > > >>>>>>>>>>>>>>>>>>>>>> context
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> field at
> > > > > >> TransactionProxyImpl.
> > > > > >>>>>> Perhaps,
> > > > > >>>>>>>> it
> > > > > >>>>>>>>>> must
> > > > > >>>>>>>>>>>> be
> > > > > >>>>>>>>>>>>>>>> injected
> > > > > >>>>>>>>>>>>>>>>>> by
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> GridResourceProcessor ?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 17:27,
> > > > > >>>>> ALEKSEY
> > > > > >>>>>>>>>> KUZNETSOV
> > > > > >>>>>>>>>>> <
> > > > > >>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> while starting and
> > > > > >> continuing
> > > > > >>>>>>>> transaction
> > > > > >>>>>>>>>> in
> > > > > >>>>>>>>>>>>>>> different
> > > > > >>>>>>>>>>>>>>>>> jvms
> > > > > >>>>>>>>>>>>>>>>>>> I
> > > > > >>>>>>>>>>>>>>>>>>>>> run
> > > > > >>>>>>>>>>>>>>>>>>>>>>> into
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> serialization exception in
> > > > > >>>>>>>>>> writeExternalMeta
> > > > > >>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> some meta cannot be
> > > > > >>>>> serialized.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в
> > > > > >> 17:25,
> > > > > >>>>> Alexey
> > > > > >>>>>>>>>>> Goncharuk <
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> alexey.goncharuk@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to
> > > > > >> get
> > > > > >>>> what
> > > > > >>>>>> you
> > > > > >>>>>>>>> want,
> > > > > >>>>>>>>>>>> but I
> > > > > >>>>>>>>>>>>>>> have
> > > > > >>>>>>>>>>>>>>>> a
> > > > > >>>>>>>>>>>>>>>>>> few
> > > > > >>>>>>>>>>>>>>>>>>>>>>> concerns:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the
> > > > > >>>>> proposed
> > > > > >>>>>>>>> change?
> > > > > >>>>>>>>>>> In
> > > > > >>>>>>>>>>>>> your
> > > > > >>>>>>>>>>>>>>>> test,
> > > > > >>>>>>>>>>>>>>>>>> you
> > > > > >>>>>>>>>>>>>>>>>>>>> pass
> > > > > >>>>>>>>>>>>>>>>>>>>>> an
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> instance of transaction
> > > > > >>> created
> > > > > >>>>> on
> > > > > >>>>>>>>>> ignite(0)
> > > > > >>>>>>>>>>> to
> > > > > >>>>>>>>>>>>> the
> > > > > >>>>>>>>>>>>>>>>> ignite
> > > > > >>>>>>>>>>>>>>>>>>>>> instance
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> ignite(1). This is
> > > > > >> obviously
> > > > > >>>> not
> > > > > >>>>>>>> possible
> > > > > >>>>>>>>>> in
> > > > > >>>>>>>>>>> a
> > > > > >>>>>>>>>>>>>> truly
> > > > > >>>>>>>>>>>>>>>>>>>> distributed
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> (multi-jvm) environment.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize
> > > > > >>>> cache
> > > > > >>>>>>> update
> > > > > >>>>>>>>>>> actions
> > > > > >>>>>>>>>>>>> and
> > > > > >>>>>>>>>>>>>>>>>>> transaction
> > > > > >>>>>>>>>>>>>>>>>>>>>>> commit?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Say, you have one node that
> > > > > >>>>> decided
> > > > > >>>>>>> to
> > > > > >>>>>>>>>>> commit,
> > > > > >>>>>>>>>>>>> but
> > > > > >>>>>>>>>>>>>>>>> another
> > > > > >>>>>>>>>>>>>>>>>>> node
> > > > > >>>>>>>>>>>>>>>>>>>>> is
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> still
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> writing within this
> > > > > >>>> transaction.
> > > > > >>>>>> How
> > > > > >>>>>>> do
> > > > > >>>>>>>>> you
> > > > > >>>>>>>>>>>> make
> > > > > >>>>>>>>>>>>>> sure
> > > > > >>>>>>>>>>>>>>>>> that
> > > > > >>>>>>>>>>>>>>>
>
> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Sorry for misleading you. We planned to support multi-node transactions,
but failed.

Fri, Mar 31, 2017 at 10:51, Alexey Goncharuk <al...@gmail.com>:

> Well, now the scenario is more clear, but it has nothing to do with
> multiple coordinators :) Let me think a little bit about it.
>
> 2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > so what do u think on the issue ?
> >
> > чт, 30 Мар 2017 г., 17:49 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > Hi! Thanks for the help. I've created a ticket:
> > > https://issues.apache.org/jira/browse/IGNITE-4887
> > > and a commit:
> > > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
> > 436b638e5c
> > > We really need this feature
> > >
> > > чт, 30 мар. 2017 г. в 11:31, Alexey Goncharuk <
> > alexey.goncharuk@gmail.com
> > > >:
> > >
> > > Aleksey,
> > >
> > > I doubt your approach works as expected. Current transaction recovery
> > > protocol heavily relies on the originating node ID in its internal
> logic.
> > > For example, currently a transaction will be rolled back if you want to
> > > transfer a transaction ownership to another node and original tx owner
> > > fails. An attempt to commit such a transaction on another node may fail
> > > with all sorts of assertions. After transaction ownership changed, you
> > need
> > > to notify all current transaction participants about this change, and
> it
> > > should also be done failover-safe, let alone that you did not add any
> > tests
> > > for these cases.
> > >
> > > I back Denis here. Please create a ticket first and come up with clear
> > > use-cases, API and protocol changes design. It is hard to reason about
> > the
> > > changes you've made when we do not even understand why you are making
> > these
> > > changes and how they are supposed to work.
> > >
> > > --AG
> > >
> > > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > So, what do you think of my idea?
> > > >
> > > > ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > >
> > > > > Hi! No, I don't have a ticket for this.
> > > > > In the patch I have implemented methods that change the transaction
> > > > > status to STOP, thus letting another thread commit the transaction.
> > > > > In that other thread you restart the transaction in order to commit
> > > > > it. The mechanism behind it is straightforward: we change the thread
> > > > > ID to the new one in the thread map, and use serialization of txState
> > > > > and of the transactions themselves to transfer them to another thread.
> > > > >
> > > > >
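A rough sketch of how the proposed hand-over could look from user code;
stop() and restart() below stand for the status-change methods described in
the message above and are hypothetical, not existing Ignite API:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    public class TxHandoverSketch {
        /** Thread 1: starts the tx, does some work, then would mark it
         *  STOP so that another thread may adopt it. */
        Transaction begin(Ignite ignite, IgniteCache<Integer, String> cache) {
            Transaction tx = ignite.transactions().txStart();
            cache.put(1, "value");
            // tx.stop(); // proposed: detach the tx from this thread
            return tx;
        }

        /** Thread 2: would rebind the tx to itself (the patch rewrites the
         *  thread id in the thread map) and then commit it. */
        void finish(Transaction tx) {
            // tx.restart(); // proposed: adopt the tx in the current thread
            tx.commit();
        }
    }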
> > > > > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dm...@apache.org>:
> > > > >
> > > > > Aleksey,
> > > > >
> > > > > Do you have a ticket for this? Could you briefly list what exactly
> > was
> > > > > done and how the things work.
> > > > >
> > > > > —
> > > > > Denis
> > > > >
> > > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Hi, Igniters! I've made an implementation of transactions with a
> > > > > > non-single coordinator. Here you can start a transaction in one
> > > > > > thread and commit it in another thread.
> > > > > > Take a look at it. Give your thoughts on it.
> > > > > >
> > > > > >
> > > > > https://github.com/voipp/ignite/pull/10/commits/
> > > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > > > > >
> > > > > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <
> > > sergi.vladykin@gmail.com
> > > > >:
> > > > > >
> > > > > >> You know better, go ahead! :)
> > > > > >>
> > > > > >> Sergi
> > > > > >>
> > > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > >>
> > > > > >>> we've discovered several problems regarding your "accumulation"
> > > > > >>> approach. These are:
> > > > > >>>
> > > > > >>>   1. performance issues when transferring data from the temporary
> > > > > >>>   cache to the permanent one. Keep in mind the large number of
> > > > > >>>   concurrent transactions in the Service committer
> > > > > >>>   2. extreme memory load when keeping the temporary cache in
> > > > > >>>   memory
> > > > > >>>   3. As long as the user is not acquainted with Ignite, working
> > > > > >>>   with the cache must be transparent for him. Keep this in mind.
> > > > > >>>   The user's node can evaluate logic with no transaction at all,
> > > > > >>>   so we should deal with both types of execution flow:
> > > > > >>>   transactional and non-transactional. Another problem is
> > > > > >>>   transaction id support at the user node. We would have to
> > > > > >>>   handle all these issues and many more.
> > > > > >>>   4. we cannot pessimistically lock an entity.
> > > > > >>>
> > > > > >>> As a result, we decided to move on to building a distributed
> > > > > >>> transaction. We put aside your "accumulation" approach until we
> > > > > >>> figure out how to solve the difficulties above.
> > > > > >>>
> > > > > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> > > > sergi.vladykin@gmail.com
> > > > > >:
> > > > > >>>
> > > > > >>>> The problem "How to run millions of entities, and millions of
> > > > > >> operations
> > > > > >>> on
> > > > > >>>> a single Pentium3" is out of scope here. Do the math, plan
> > > capacity
> > > > > >>>> reasonably.
> > > > > >>>>
> > > > > >>>> Sergi
> > > > > >>>>
> > > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > >>> :
> > > > > >>>>
> > > > > >>>>> hmm, if we have millions of entities and millions of
> > > > > >>>>> operations, would this approach not lead to memory overflow and
> > > > > >>>>> performance degradation?
> > > > > >>>>>
> > > > > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > > > > >> sergi.vladykin@gmail.com
> > > > > >>>> :
> > > > > >>>>>
> > > > > >>>>>> 1. Actually you have to check versions on all the values you
> > > have
> > > > > >>> read
> > > > > >>>>>> during the tx.
> > > > > >>>>>>
> > > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > > > > >>>>>>
> > > > > >>>>>> put(k1, get(k2) + 5)
> > > > > >>>>>>
> > > > > >>>>>> We have to remember the version for k2. This logic can be
> > > > > >> relatively
> > > > > >>>>> easily
> > > > > >>>>>> encapsulated in a framework atop of Ignite. You need to
> > > implement
> > > > > >> one
> > > > > >>>> to
> > > > > >>>>>> make all this stuff usable.
> > > > > >>>>>>
> > > > > >>>>>> 2. I suggest to avoid any locking here, because you easily
> > will
> > > > end
> > > > > >>> up
> > > > > >>>>> with
> > > > > >>>>>> deadlocks. If you do not have too frequent updates for your
> > > keys,
> > > > > >>>>>> optimistic approach will work just fine.
> > > > > >>>>>>
> > > > > >>>>>> Theoretically in the Committer Service you can start a
> thread
> > > for
> > > > > >> the
> > > > > >>>>>> lifetime of the whole distributed transaction, take a lock
> on
> > > the
> > > > > >> key
> > > > > >>>>> using
> > > > > >>>>>> IgniteCache.lock(K key) before executing any Services, wait
> > for
> > > > all
> > > > > >>> the
> > > > > >>>>>> services to complete, execute optimistic commit in the same
> > > thread
> > > > > >>>> while
> > > > > >>>>>> keeping this lock and then release it. Notice that all the
> > > Ignite
> > > > > >>>>>> transactions inside of all Services must be optimistic here
> to
> > > be
> > > > > >>> able
> > > > > >>>> to
> > > > > >>>>>> read this locked key.
> > > > > >>>>>>
> > > > > >>>>>> But again I do not recommend you using this approach until
> you
> > > > > >> have a
> > > > > >>>>>> reliable deadlock avoidance scheme.
> > > > > >>>>>>
> > > > > >>>>>> Sergi
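For illustration, a minimal sketch of the kind of framework point 1 suggests
atop Ignite, remembering the version of every value read inside the logical
transaction; the Versioned wrapper and ReadTracker names are assumptions, not
Ignite API:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.IgniteCache;

    /** A value wrapper carrying a version, as discussed above. */
    class Versioned<V> {
        final V val;
        final UUID ver;
        Versioned(V val, UUID ver) { this.val = val; this.ver = ver; }
    }

    /** Remembers the version of every value read in the logical tx so the
     *  committer can later verify that none of them has changed. */
    class ReadTracker<K, V> {
        private final IgniteCache<K, Versioned<V>> cache;
        private final Map<K, UUID> readVersions = new HashMap<>();

        ReadTracker(IgniteCache<K, Versioned<V>> cache) { this.cache = cache; }

        /** get(k2) also records ver(k2), so put(k1, get(k2) + 5) is covered. */
        V get(K key) {
            Versioned<V> v = cache.get(key);
            if (v != null)
                readVersions.put(key, v.ver);
            return v == null ? null : v.val;
        }

        Map<K, UUID> readVersions() { return readVersions; }
    }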
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
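And a sketch of the lock-based variant from point 2, following the outline
above and subject to the same deadlock caveat; runServices is an assumed
callback that invokes all services and waits for their completion:

    import java.util.concurrent.locks.Lock;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    public class LockingCommitterSketch {
        void run(Ignite ignite, IgniteCache<String, Object> cache,
                 String key, Runnable runServices) {
            Lock lock = cache.lock(key); // requires a TRANSACTIONAL cache
            lock.lock();
            try {
                runServices.run(); // invoke all services, wait for completion

                // Optimistic commit in the same thread, still holding the lock.
                try (Transaction tx =
                         ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                    // ... apply the accumulated updates here ...
                    tx.commit();
                }
            }
            finally {
                lock.unlock();
            }
        }
    }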
> > > > > >>>>>>
> > > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>> alkuznetsov.sb@gmail.com
> > > > > >>>>> :
> > > > > >>>>>>
> > > > > >>>>>>> Yeah, now I got it.
> > > > > >>>>>>> There are some doubts on this approach:
> > > > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> > > > > >>>>>>> altered the original values, you must check versions of the
> > > > > >>>>>>> other dependent keys. How could we obtain those keys (in an
> > > > > >>>>>>> automated manner, of course)?
> > > > > >>>>>>> 2) How could we lock a key before some Service A introduces
> > > > > >>>>>>> changes, so that no other service is allowed to change this
> > > > > >>>>>>> key-value (a sort of pessimistic locking)?
> > > > > >>>>>>> Maybe you know some implementations of such an approach?
> > > > > >>>>>>>
> > > > > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > > > > >>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>
> > > > > >>>>>>>> Thank you very much for help.  I will answer later.
> > > > > >>>>>>>>
> > > > > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > > > > >>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>>
> > > > > >>>>>>>> None of the services update keys in place; they only
> > > > > >>>>>>>> generate new keys augmented by otx and store the updated
> > > > > >>>>>>>> value in the same cache, plus remember the keys and versions
> > > > > >>>>>>>> participating in the transaction in some separate atomic
> > > > > >>>>>>>> cache.
> > > > > >>>>>>>>
> > > > > >>>>>>>> Follow this sequence of changes applied to cache contents
> by
> > > > > >> each
> > > > > >>>>>>> Service:
> > > > > >>>>>>>>
> > > > > >>>>>>>> Initial cache contents:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>
> > > > > >>>>>>>> Cache contents after Service A:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2a]
> > > > > >>>>>>>>
> > > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > > > >>> atomic
> > > > > >>>>>> cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> Cache contents after Service B:
> > > > > >>>>>>>>            [k1 => v1]
> > > > > >>>>>>>>            [k2 => v2]
> > > > > >>>>>>>>            [k3 => v3]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2ab]
> > > > > >>>>>>>>            [k3x => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
> some
> > > > > >>>>> separate
> > > > > >>>>>>>> atomic cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> Finally the Committer Service takes this map of updated
> keys
> > > > > >> and
> > > > > >>>>> their
> > > > > >>>>>>>> versions from some separate atomic cache, starts Ignite
> > > > > >>> transaction
> > > > > >>>>> and
> > > > > >>>>>>>> replaces all the values for k* keys to values taken from
> k*x
> > > > > >>> keys.
> > > > > >>>>> The
> > > > > >>>>>>>> successful result must be the following:
> > > > > >>>>>>>>
> > > > > >>>>>>>>            [k1 => v1a]
> > > > > >>>>>>>>            [k2 => v2ab]
> > > > > >>>>>>>>            [k3 => v3b]
> > > > > >>>>>>>>            [k1x => v1a]
> > > > > >>>>>>>>            [k2x => v2ab]
> > > > > >>>>>>>>            [k3x => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in
> some
> > > > > >>>>> separate
> > > > > >>>>>>>> atomic cache
> > > > > >>>>>>>>
> > > > > >>>>>>>> But Committer Service also has to check that no one
> updated
> > > the
> > > > > >>>>>> original
> > > > > >>>>>>>> values before us, because otherwise we can not give any
> > > > > >>>>> serializability
> > > > > >>>>>>>> guarantee for these distributed transactions. Here we may
> > need
> > > > > >> to
> > > > > >>>>> check
> > > > > >>>>>>> not
> > > > > >>>>>>>> only versions of the updated keys, but also versions of
> any
> > > > > >> other
> > > > > >>>>> keys
> > > > > >>>>>>> end
> > > > > >>>>>>>> result depends on.
> > > > > >>>>>>>>
> > > > > >>>>>>>> After that Committer Service has to do a cleanup (may be
> > > > > >> outside
> > > > > >>> of
> > > > > >>>>> the
> > > > > >>>>>>>> committing tx) to come to the following final state:
> > > > > >>>>>>>>
> > > > > >>>>>>>>            [k1 => v1a]
> > > > > >>>>>>>>            [k2 => v2ab]
> > > > > >>>>>>>>            [k3 => v3b]
> > > > > >>>>>>>>
> > > > > >>>>>>>> Makes sense?
> > > > > >>>>>>>>
> > > > > >>>>>>>> Sergi
> > > > > >>>>>>>>
> > > > > >>>>>>>>
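A sketch of that committer step under the walkthrough's assumptions; the
Versioned value shape, the k*x staging-key scheme and the expected-versions
map are illustrative, not a fixed API:

    import java.util.Map;
    import java.util.UUID;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    public class CommitterSketch {
        static class Versioned {
            final Object val;
            final UUID ver;
            Versioned(Object val, UUID ver) { this.val = val; this.ver = ver; }
        }

        /** Swaps the staged k*x values onto the real k* keys in one tx,
         *  aborting if any original value's version changed in the meantime. */
        boolean commit(Ignite ignite, IgniteCache<String, Versioned> cache,
                       UUID otx, Map<String, UUID> expectedVers) {
            try (Transaction tx = ignite.transactions().txStart()) {
                for (Map.Entry<String, UUID> e : expectedVers.entrySet()) {
                    Versioned cur = cache.get(e.getKey());

                    // Version mismatch: someone committed first, so signal
                    // the orchestrator to restart the job with a new otx.
                    if (cur == null || !cur.ver.equals(e.getValue()))
                        return false;

                    cache.put(e.getKey(), cache.get(stagingKey(e.getKey(), otx)));
                }
                tx.commit();
            }

            // Cleanup of the temporary k*x entries may happen outside the tx.
            for (String k : expectedVers.keySet())
                cache.remove(stagingKey(k, otx));

            return true;
        }

        private String stagingKey(String k, UUID otx) { return k + "#" + otx; }
    }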
> > > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>> :
> > > > > >>>>>>>>
> > > > > >>>>>>>>>   - What do you mean by saying "*in a single transaction
> > > > > >>>>>>>>>   checks value versions for all the old values and replaces
> > > > > >>>>>>>>>   them with calculated new ones*"? Every time you change a
> > > > > >>>>>>>>>   value (in some service), you store it to *some special
> > > > > >>>>>>>>>   atomic cache*, so when all services have ceased working,
> > > > > >>>>>>>>>   the Service committer gets the values with the latest
> > > > > >>>>>>>>>   versions.
> > > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
> > > > > >>>>>>>>>   Service committer persists them into the permanent store,
> > > > > >>>>>>>>>   doesn't it?
> > > > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> > > > > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
> > > > > >>>>>>>>>   would it match?
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
> > > > > >>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>> :
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > > > > >>>>> application
> > > > > >>>>>>>> level.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Lets say we have to call 2 services in the following
> > order:
> > > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 => v2]
> > > > > >> to
> > > > > >>>>> [k1
> > > > > >>>>>> =>
> > > > > >>>>>>>>> v1a,
> > > > > >>>>>>>>>>  k2 => v2a]
> > > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> > > > > >> to
> > > > > >>>> [k2
> > > > > >>>>>> =>
> > > > > >>>>>>>>> v2ab,
> > > > > >>>>>>>>>> k3 => v3b]
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> The change
> > > > > >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > > > > >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > >>>>>>>>>> must happen in a single transaction.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Optimistic protocol to solve this:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a
> unique
> > > > > >>>>>>> orchestrator
> > > > > >>>>>>>> TX
> > > > > >>>>>>>>>> identifier - it must be a parameter passed to all the
> > > > > >>> services.
> > > > > >>>>> If
> > > > > >>>>>>>> `otx`
> > > > > >>>>>>>>> is
> > > > > >>>>>>>>>> set to some value it means that it is an intermediate
> key
> > > > > >> and
> > > > > >>>> is
> > > > > >>>>>>>> visible
> > > > > >>>>>>>>>> only inside of some transaction, for the finalized key
> > > > > >> `otx`
> > > > > >>>> must
> > > > > >>>>>> be
> > > > > >>>>>>>>> null -
> > > > > >>>>>>>>>> it means the key is committed and visible for everyone.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Each cache value must have a field `ver` which is a
> > version
> > > > > >>> of
> > > > > >>>>> that
> > > > > >>>>>>>>> value.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to
> use
> > > > > >>>> UUID.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Workflow is the following:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Orchestrator starts the distributed transaction with
> `otx`
> > > > > >> =
> > > > > >>> x
> > > > > >>>>> and
> > > > > >>>>>>>> passes
> > > > > >>>>>>>>>> this parameter to all the services.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service A:
> > > > > >>>>>>>>>> - does some computations
> > > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > > > > >>>>>>>>>>      where
> > > > > >>>>>>>>>>          Za - left time from max Orchestrator TX
> duration
> > > > > >>>> after
> > > > > >>>>>>>> Service
> > > > > >>>>>>>>> A
> > > > > >>>>>>>>>> end
> > > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx`
> =
> > > > > >> x
> > > > > >>>>>>>>>>          v2a has updated version `ver`
> > > > > >>>>>>>>>> - returns a set of updated keys and all the old versions
> > > > > >> to
> > > > > >>>> the
> > > > > >>>>>>>>>> orchestrator
> > > > > >>>>>>>>>>       or just stores it in some special atomic cache
> like
> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service B:
> > > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it
> knows
> > > > > >>>> `otx`
> > > > > >>>>> =
> > > > > >>>>>> x
> > > > > >>>>>>>>>> - does computations
> > > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > >>>>>>>>>> - updates the set of updated keys like [x => (k1 ->
> ver1,
> > > > > >> k2
> > > > > >>>> ->
> > > > > >>>>>>> ver2,
> > > > > >>>>>>>> k3
> > > > > >>>>>>>>>> -> ver3)] TTL = Zb
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> > > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > >>>>>>>>>> - in a single transaction checks value versions for all
> > > > > >> the
> > > > > >>>> old
> > > > > >>>>>>> values
> > > > > >>>>>>>>>>       and replaces them with calculated new ones
> > > > > >>>>>>>>>> - does cleanup of temporary keys and values
> > > > > >>>>>>>>>> - in case of version mismatch or TX timeout just
> rollbacks
> > > > > >>> and
> > > > > >>>>>>> signals
> > > > > >>>>>>>>>>        to Orchestrator to restart the job with new `otx`
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> PROFIT!!
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> This approach even allows you to run independent parts
> of
> > > > > >> the
> > > > > >>>>> graph
> > > > > >>>>>>> in
> > > > > >>>>>>>>>> parallel (with TX transfer you will always run only one
> at
> > > > > >> a
> > > > > >>>>> time).
> > > > > >>>>>>>> Also
> > > > > >>>>>>>>> it
> > > > > >>>>>>>>>> does not require inventing any special fault tolerance
> > > > > >>> technics
> > > > > >>>>>>> because
> > > > > >>>>>>>>>> Ignite caches are already fault tolerant and all the
> > > > > >>>> intermediate
> > > > > >>>>>>>> results
> > > > > >>>>>>>>>> are virtually invisible and stored with TTL, thus in
> case
> > > > > >> of
> > > > > >>>> any
> > > > > >>>>>>> crash
> > > > > >>>>>>>>> you
> > > > > >>>>>>>>>> will not have inconsistent state or garbage.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Sergi
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>
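A sketch of the data model this protocol implies; TxKey/TxValue and the
expiry wiring are illustrative assumptions showing the otx/ver fields and
the Za-style TTL on staged entries:

    import java.util.UUID;
    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.IgniteCache;

    /** Key with otx: null means committed and visible to everyone,
     *  non-null marks a staged intermediate key. equals/hashCode omitted. */
    class TxKey {
        final String key;
        final UUID otx;
        TxKey(String key, UUID otx) { this.key = key; this.otx = otx; }
    }

    /** Value with ver, refreshed on every update. */
    class TxValue {
        final Object val;
        final UUID ver = UUID.randomUUID();
        TxValue(Object val) { this.val = val; }
    }

    class ServiceSketch {
        /** Stores staged entries that expire after the remaining
         *  orchestrator-tx time (Za), so a crash leaves no garbage. */
        void store(IgniteCache<TxKey, TxValue> cache, UUID otx, long zaMillis) {
            IgniteCache<TxKey, TxValue> staged = cache.withExpiryPolicy(
                new CreatedExpiryPolicy(
                    new Duration(TimeUnit.MILLISECONDS, zaMillis)));

            staged.put(new TxKey("k1", otx), new TxValue("v1a"));
            staged.put(new TxKey("k2", otx), new TxValue("v2a"));
        }
    }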
> > > > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>> :
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>>> Okay, we are open to proposals on the business task. I
> > > > > >>>>>>>>>>> mean, we can make use of some other mechanism, not a
> > > > > >>>>>>>>>>> distributed transaction. Perhaps not a transaction at all.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <
> > > > > >>>>>> vozerov@gridgain.com
> > > > > >>>>>>>> :
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi
> already
> > > > > >>>>>>> mentioned,
> > > > > >>>>>>>>> the
> > > > > >>>>>>>>>>>> problem is far more complex, than simply passing TX
> > > > > >> state
> > > > > >>>>> over
> > > > > >>>>>> a
> > > > > >>>>>>>>> wire.
> > > > > >>>>>>>>>>> Most
> > > > > >>>>>>>>>>>> probably a kind of coordinator will be required still
> > > > > >> to
> > > > > >>>>> manage
> > > > > >>>>>>> all
> > > > > >>>>>>>>>> kinds
> > > > > >>>>>>>>>>>> of failures. This task should be started with clean
> > > > > >>> design
> > > > > >>>>>>> proposal
> > > > > >>>>>>>>>>>> explaining how we handle all these concurrent events.
> > > > > >> And
> > > > > >>>>> only
> > > > > >>>>>>>> then,
> > > > > >>>>>>>>>> when
> > > > > >>>>>>>>>>>> we understand all implications, we should move to
> > > > > >>>> development
> > > > > >>>>>>>> stage.
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Right
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
> > > > > >>>>>> predefined
> > > > > >>>>>>>>> graph
> > > > > >>>>>>>>>> of
> > > > > >>>>>>>>>>>>>> distributed services to be invoked, calls them by
> > > > > >>> some
> > > > > >>>>> kind
> > > > > >>>>>>> of
> > > > > >>>>>>>>> RPC
> > > > > >>>>>>>>>>> and
> > > > > >>>>>>>>>>>>>> passes the needed parameters between them, right?
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible
> > > > > >>>>>>>>>>>>>>> for managing business scenario flows. Many nodes are
> > > > > >>>>>>>>>>>>>>> involved in the scenarios; they exchange data and
> > > > > >>>>>>>>>>>>>>> follow one another. If you are acquainted with the
> > > > > >>>>>>>>>>>>>>> BPMN framework, the orchestrator is like a BPMN
> > > > > >>>>>>>>>>>>>>> engine.
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
> > > > > >>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
> > > > > >> from
> > > > > >>>>>>> Microsoft
> > > > > >>>>>>>> or
> > > > > >>>>>>>>>>> your
> > > > > >>>>>>>>>>>>>> custom
> > > > > >>>>>>>>>>>>>>>> in-house software?
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers which
> > > > > >>>>>>>>>>>>>>>>> fulfill custom logic. These servers compose an
> > > > > >>>>>>>>>>>>>>>>> oriented graph (a BPMN process) which is controlled
> > > > > >>>>>>>>>>>>>>>>> by the Orchestrator.
> > > > > >>>>>>>>>>>>>>>>> For instance, *server1* creates *variable A* with
> > > > > >>>>>>>>>>>>>>>>> value 1, persists it to the IGNITE cache, creates
> > > > > >>>>>>>>>>>>>>>>> *variable B* and sends it to *server2*. The latter
> > > > > >>>>>>>>>>>>>>>>> receives *variable B*, does some logic with it and
> > > > > >>>>>>>>>>>>>>>>> stores it to IGNITE.
> > > > > >>>>>>>>>>>>>>>>> All the work done by both servers must be performed
> > > > > >>>>>>>>>>>>>>>>> in *one* transaction, because we need either all
> > > > > >>>>>>>>>>>>>>>>> the information persisted, or nothing (rolled
> > > > > >>>>>>>>>>>>>>>>> back). The scenario is managed by the orchestrator.
> > > > > >>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > > > > >>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your wrong
> > > > > >>>>>>>>>>>>>>>>>> solution for it.
> > > > > >>>>>>>>>>>>>>>>>> Let's try again, what is the business case?
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
> > > > > >> KUZNETSOV
> > > > > >>> <
> > > > > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts a
> > > > > >>>>>>>>>>>>>>>>>>> transaction on one node, and commits this
> > > > > >>>>>>>>>>>>>>>>>>> transaction on another JVM node (or rolls it back
> > > > > >>>>>>>>>>>>>>>>>>> remotely).
> > > > > >>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 16:30, Sergi
> > > > > >>> Vladykin <
> > > > > >>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > > > > >>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for some
> > > > > >>>>>>>>>>>>>>>>>>>> simplistic scenario, get ready to write many
> > > > > >>>>>>>>>>>>>>>>>>>> fault tolerance tests and make sure that your
> > > > > >>>>>>>>>>>>>>>>>>>> TXs work gracefully in all modes in case of
> > > > > >>>>>>>>>>>>>>>>>>>> crashes. Also make sure that we do not have any
> > > > > >>>>>>>>>>>>>>>>>>>> performance drops after all your changes in the
> > > > > >>>>>>>>>>>>>>>>>>>> existing benchmarks. All in all I don't believe
> > > > > >>>>>>>>>>>>>>>>>>>> these conditions will be met and your
> > > > > >>>>>>>>>>>>>>>>>>>> contribution will be accepted.
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Better solution to what problem? Sending the TX
> > > > > >>>>>>>>>>>>>>>>>>>> to another node? The problem statement itself is
> > > > > >>>>>>>>>>>>>>>>>>>> already wrong. What business case are you trying
> > > > > >>>>>>>>>>>>>>>>>>>> to solve? I'm sure everything you need can be
> > > > > >>>>>>>>>>>>>>>>>>>> done in a much more simple and efficient way at
> > > > > >>>>>>>>>>>>>>>>>>>> the application level.
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
> > > > > >>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>> Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> Just serializing the TX object and
> > > > > >>>>>>>>>>>>>>>>>>>>>> deserializing it on another node is
> > > > > >>>>>>>>>>>>>>>>>>>>>> meaningless, because other nodes participating
> > > > > >>>>>>>>>>>>>>>>>>>>>> in the TX have to know about the new
> > > > > >>>>>>>>>>>>>>>>>>>>>> coordinator. This will require protocol
> > > > > >>>>>>>>>>>>>>>>>>>>>> changes, and we will definitely have fault
> > > > > >>>>>>>>>>>>>>>>>>>>>> tolerance and performance issues. IMO the
> > > > > >>>>>>>>>>>>>>>>>>>>>> whole idea is wrong and it makes no sense to
> > > > > >>>>>>>>>>>>>>>>>>>>>> waste time on it.
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> Sergi
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > > > > >>>>>> KUZNETSOV
> > > > > >>>>>>> <
> > > > > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState implementation
> > > > > >>>>>>>>>>>>>>>>>>>>>>> contains IgniteTxEntry's, which are supposed
> > > > > >>>>>>>>>>>>>>>>>>>>>>> to be transferable
> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>> Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that we are
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> passing transaction objects around. Such an
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> object may contain all sorts of Ignite
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> context. If some data needs to be passed
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> across, we should create a special transfer
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> object in this case.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> D.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> Well, there are a couple of issues
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> preventing the transaction from proceeding.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> First, after transaction serialization and
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> deserialization on the remote server, there
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> is no txState. So I'm going to put it in
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> writeExternal()/readExternal().
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> The second one is that the deserialized
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> transaction lacks the shared cache context
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> field at TransactionProxyImpl. Perhaps it
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> must be injected by GridResourceProcessor?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> GridResourceProcessor ?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> While starting and continuing a
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> transaction in different JVMs I run into a
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> serialization exception in
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> writeExternalMeta:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> writeExternal(ObjectOutput out)
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>     throws IOException {
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>     // ...
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> }
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Some meta cannot be serialized.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в
> > > > > >> 17:25,
> > > > > >>>>> Alexey
> > > > > >>>>>>>>>>> Goncharuk <
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>> alexey.goncharuk@gmail.com
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> :
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to
> > > > > >> get
> > > > > >>>> what
> > > > > >>>>>> you
> > > > > >>>>>>>>> want,
> > > > > >>>>>>>>>>>> but I
> > > > > >>>>>>>>>>>>>>> have
> > > > > >>>>>>>>>>>>>>>> a
> > > > > >>>>>>>>>>>>>>>>>> few
> > > > > >>>>>>>>>>>>>>>>>>>>>>> concerns:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the proposed change?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> In your test, you pass an instance of
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> transaction created on ignite(0) to the
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> ignite instance ignite(1). This is
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> obviously not possible in a truly
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> distributed (multi-jvm) environment.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize cache update
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> actions and transaction commit? Say, you
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> have one node that decided to commit, but
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> another node is still writing within this
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> transaction. How do you make sure that two
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> nodes will not call commit() and
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> rollback() simultaneously?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> - How do you make sure that either
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> commit() or rollback() is called if an
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> originator failed?
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <somefireone@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> Alexey Goncharuk, heh, my initial
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> understanding was that transferring of tx
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> ownership from one node to another would
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> happen automatically when the originating
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> node goes down.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm aiming to span a transaction over
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> multiple threads, nodes, and JVMs
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> (soon). So every node is able to roll
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> back or commit the common transaction.
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> It turned out I need to transfer the tx
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> between nodes in order to commit the
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> transaction on a different node (in the
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> same JVM).
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> Fri, Mar 10, 2017 at 15:20, Alexey
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Do you mean that you want a concept of
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> transferring of tx ownership from one
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> node to another? My initial
> > > > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> understanding was that you want to be
> > > > > >>>>>>>>>>>>>>>>
> > >
> > > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Alexey Goncharuk <al...@gmail.com>.
Well, now the scenario is more clear, but it has nothing to do with
multiple coordinators :) Let me think a little bit about it.

2017-03-31 9:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> > So what do you think on the issue?
>
> чт, 30 Мар 2017 г., 17:49 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Hi! Thanks for the help. I've created a ticket:
> > https://issues.apache.org/jira/browse/IGNITE-4887
> > and a commit :
> > https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06
> 436b638e5c
> > We really need this feature
> >
> > чт, 30 мар. 2017 г. в 11:31, Alexey Goncharuk <
> alexey.goncharuk@gmail.com
> > >:
> >
> > Aleksey,
> >
> > I doubt your approach works as expected. Current transaction recovery
> > protocol heavily relies on the originating node ID in its internal logic.
> > For example, currently a transaction will be rolled back if you want to
> > transfer a transaction ownership to another node and original tx owner
> > fails. An attempt to commit such a transaction on another node may fail
> > with all sorts of assertions. After transaction ownership changed, you
> need
> > to notify all current transaction participants about this change, and it
> > should also be done failover-safe, let alone that you did not add any
> tests
> > for these cases.
> >
> > I back Denis here. Please create a ticket first and come up with clear
> > use-cases, API and protocol changes design. It is hard to reason about
> the
> > changes you've made when we do not even understand why you are making
> these
> > changes and how they are supposed to work.
> >
> > --AG
> >
> > 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > So, what do you think of my idea?
> > >
> > > ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > >
> > > > Hi! No, I don't have a ticket for this.
> > > > In the patch I have implemented methods that change the transaction
> > > > status to STOP, thus letting another thread commit the transaction.
> > > > In that other thread you restart the transaction in order to commit
> > > > it. The mechanism behind it is straightforward: we change the thread
> > > > ID to the new one in the thread map, and use serialization of txState
> > > > and of the transactions themselves to transfer them to another thread.
> > > >
> > > >
> > > > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dm...@apache.org>:
> > > >
> > > > Aleksey,
> > > >
> > > > Do you have a ticket for this? Could you briefly list what exactly
> was
> > > > done and how the things work.
> > > >
> > > > —
> > > > Denis
> > > >
> > > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi, Igniters! I've made an implementation of transactions with a
> > > > > non-single coordinator. Here you can start a transaction in one
> > > > > thread and commit it in another thread.
> > > > > Take a look at it. Give your thoughts on it.
> > > > >
> > > > >
> > > > https://github.com/voipp/ignite/pull/10/commits/
> > > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > > > >
> > > > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <
> > sergi.vladykin@gmail.com
> > > >:
> > > > >
> > > > >> You know better, go ahead! :)
> > > > >>
> > > > >> Sergi
> > > > >>
> > > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > > >:
> > > > >>
> > > > >>> we've discovered several problems regarding your "accumulation"
> > > > >>> approach. These are:
> > > > >>>
> > > > >>>   1. performance issues when transferring data from the temporary
> > > > >>>   cache to the permanent one. Keep in mind the large number of
> > > > >>>   concurrent transactions in the Service committer
> > > > >>>   2. extreme memory load when keeping the temporary cache in memory
> > > > >>>   3. As long as the user is not acquainted with Ignite, working
> > > > >>>   with the cache must be transparent for him. Keep this in mind.
> > > > >>>   The user's node can evaluate logic with no transaction at all,
> > > > >>>   so we should deal with both types of execution flow:
> > > > >>>   transactional and non-transactional. Another problem is
> > > > >>>   transaction id support at the user node. We would have to handle
> > > > >>>   all these issues and many more.
> > > > >>>   4. we cannot pessimistically lock an entity.
> > > > >>>
> > > > >>> As a result, we decided to move on to building a distributed
> > > > >>> transaction. We put aside your "accumulation" approach until we
> > > > >>> figure out how to solve the difficulties above.
> > > > >>>
> > > > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> > > sergi.vladykin@gmail.com
> > > > >:
> > > > >>>
> > > > >>>> The problem "How to run millions of entities, and millions of
> > > > >> operations
> > > > >>> on
> > > > >>>> a single Pentium3" is out of scope here. Do the math, plan
> > capacity
> > > > >>>> reasonably.
> > > > >>>>
> > > > >>>> Sergi
> > > > >>>>
> > > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > >>> :
> > > > >>>>
> > > > >>>>> hmm, if we have millions of entities and millions of operations,
> > > > >>>>> would this approach not lead to memory overflow and performance
> > > > >>>>> degradation?
> > > > >>>>>
> > > > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > > > >> sergi.vladykin@gmail.com
> > > > >>>> :
> > > > >>>>>
> > > > >>>>>> 1. Actually you have to check versions on all the values you
> > have
> > > > >>> read
> > > > >>>>>> during the tx.
> > > > >>>>>>
> > > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > > > >>>>>>
> > > > >>>>>> put(k1, get(k2) + 5)
> > > > >>>>>>
> > > > >>>>>> We have to remember the version for k2. This logic can be
> > > > >> relatively
> > > > >>>>> easily
> > > > >>>>>> encapsulated in a framework atop of Ignite. You need to
> > implement
> > > > >> one
> > > > >>>> to
> > > > >>>>>> make all this stuff usable.
> > > > >>>>>>
> > > > >>>>>> 2. I suggest to avoid any locking here, because you easily
> will
> > > end
> > > > >>> up
> > > > >>>>> with
> > > > >>>>>> deadlocks. If you do not have too frequent updates for your
> > keys,
> > > > >>>>>> optimistic approach will work just fine.
> > > > >>>>>>
> > > > >>>>>> Theoretically in the Committer Service you can start a thread
> > for
> > > > >> the
> > > > >>>>>> lifetime of the whole distributed transaction, take a lock on
> > the
> > > > >> key
> > > > >>>>> using
> > > > >>>>>> IgniteCache.lock(K key) before executing any Services, wait
> for
> > > all
> > > > >>> the
> > > > >>>>>> services to complete, execute optimistic commit in the same
> > thread
> > > > >>>> while
> > > > >>>>>> keeping this lock and then release it. Notice that all the
> > Ignite
> > > > >>>>>> transactions inside of all Services must be optimistic here to
> > be
> > > > >>> able
> > > > >>>> to
> > > > >>>>>> read this locked key.
> > > > >>>>>>
> > > > >>>>>> But again I do not recommend you using this approach until you
> > > > >> have a
> > > > >>>>>> reliable deadlock avoidance scheme.
> > > > >>>>>>
> > > > >>>>>> Sergi
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > >>> alkuznetsov.sb@gmail.com
> > > > >>>>> :
> > > > >>>>>>
> > > > >>>>>>> Yeah, now I got it.
> > > > >>>>>>> There are some doubts on this approach:
> > > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> > > > >>>>>>> altered the original values, you must check versions of the
> > > > >>>>>>> other dependent keys. How could we obtain those keys (in an
> > > > >>>>>>> automated manner, of course)?
> > > > >>>>>>> 2) How could we lock a key before some Service A introduces
> > > > >>>>>>> changes, so that no other service is allowed to change this
> > > > >>>>>>> key-value (a sort of pessimistic locking)?
> > > > >>>>>>> Maybe you know some implementations of such an approach?
> > > > >>>>>>>
> > > > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > > > >>>>> alkuznetsov.sb@gmail.com
> > > > >>>>>>> :
> > > > >>>>>>>
> > > > >>>>>>>> Thank you very much for help.  I will answer later.
> > > > >>>>>>>>
> > > > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > > > >>>>> sergi.vladykin@gmail.com
> > > > >>>>>>> :
> > > > >>>>>>>>
> > > > >>>>>>>> None of the services update keys in place; they only generate
> > > > >>>>>>>> new keys augmented by otx and store the updated value in the
> > > > >>>>>>>> same cache, plus remember the keys and versions participating
> > > > >>>>>>>> in the transaction in some separate atomic cache.
> > > > >>>>>>>>
> > > > >>>>>>>> Follow this sequence of changes applied to cache contents by
> > > > >> each
> > > > >>>>>>> Service:
> > > > >>>>>>>>
> > > > >>>>>>>> Initial cache contents:
> > > > >>>>>>>>            [k1 => v1]
> > > > >>>>>>>>            [k2 => v2]
> > > > >>>>>>>>            [k3 => v3]
> > > > >>>>>>>>
> > > > >>>>>>>> Cache contents after Service A:
> > > > >>>>>>>>            [k1 => v1]
> > > > >>>>>>>>            [k2 => v2]
> > > > >>>>>>>>            [k3 => v3]
> > > > >>>>>>>>            [k1x => v1a]
> > > > >>>>>>>>            [k2x => v2a]
> > > > >>>>>>>>
> > > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > > >>> atomic
> > > > >>>>>> cache
> > > > >>>>>>>>
> > > > >>>>>>>> Cache contents after Service B:
> > > > >>>>>>>>            [k1 => v1]
> > > > >>>>>>>>            [k2 => v2]
> > > > >>>>>>>>            [k3 => v3]
> > > > >>>>>>>>            [k1x => v1a]
> > > > >>>>>>>>            [k2x => v2ab]
> > > > >>>>>>>>            [k3x => v3b]
> > > > >>>>>>>>
> > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > >>>>> separate
> > > > >>>>>>>> atomic cache
> > > > >>>>>>>>
> > > > >>>>>>>> Finally the Committer Service takes this map of updated keys
> > > > >> and
> > > > >>>>> their
> > > > >>>>>>>> versions from some separate atomic cache, starts Ignite
> > > > >>> transaction
> > > > >>>>> and
> > > > >>>>>>>> replaces all the values for k* keys to values taken from k*x
> > > > >>> keys.
> > > > >>>>> The
> > > > >>>>>>>> successful result must be the following:
> > > > >>>>>>>>
> > > > >>>>>>>>            [k1 => v1a]
> > > > >>>>>>>>            [k2 => v2ab]
> > > > >>>>>>>>            [k3 => v3b]
> > > > >>>>>>>>            [k1x => v1a]
> > > > >>>>>>>>            [k2x => v2ab]
> > > > >>>>>>>>            [k3x => v3b]
> > > > >>>>>>>>
> > > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > >>>>> separate
> > > > >>>>>>>> atomic cache
> > > > >>>>>>>>
> > > > >>>>>>>> But Committer Service also has to check that no one updated
> > the
> > > > >>>>>> original
> > > > >>>>>>>> values before us, because otherwise we can not give any
> > > > >>>>> serializability
> > > > >>>>>>>> guarantee for these distributed transactions. Here we may
> need
> > > > >> to
> > > > >>>>> check
> > > > >>>>>>> not
> > > > >>>>>>>> only versions of the updated keys, but also versions of any
> > > > >> other
> > > > >>>>> keys
> > > > >>>>>>> end
> > > > >>>>>>>> result depends on.
> > > > >>>>>>>>
> > > > >>>>>>>> After that Committer Service has to do a cleanup (may be
> > > > >> outside
> > > > >>> of
> > > > >>>>> the
> > > > >>>>>>>> committing tx) to come to the following final state:
> > > > >>>>>>>>
> > > > >>>>>>>>            [k1 => v1a]
> > > > >>>>>>>>            [k2 => v2ab]
> > > > >>>>>>>>            [k3 => v3b]
> > > > >>>>>>>>
> > > > >>>>>>>> Makes sense?
> > > > >>>>>>>>
> > > > >>>>>>>> Sergi
> > > > >>>>>>>>
> > > > >>>>>>>>
> > > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > >>>>> alkuznetsov.sb@gmail.com
> > > > >>>>>>> :
> > > > >>>>>>>>
> > > > >>>>>>>>>   - What do you mean by saying "*in a single transaction
> > > > >>>>>>>>>   checks value versions for all the old values and replaces
> > > > >>>>>>>>>   them with calculated new ones*"? Every time you change a
> > > > >>>>>>>>>   value (in some service), you store it to *some special
> > > > >>>>>>>>>   atomic cache*, so when all services have ceased working,
> > > > >>>>>>>>>   the Service committer gets the values with the latest
> > > > >>>>>>>>>   versions.
> > > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
> > > > >>>>>>>>>   Service committer persists them into the permanent store,
> > > > >>>>>>>>>   doesn't it?
> > > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> > > > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
> > > > >>>>>>>>>   would it match?
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
> > > > >>>>>> sergi.vladykin@gmail.com
> > > > >>>>>>>> :
> > > > >>>>>>>>>
> > > > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > > > >>>>> application
> > > > >>>>>>>> level.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Lets say we have to call 2 services in the following
> order:
> > > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1, k2 => v2] to [k1 => v1a, k2 => v2a]
> > > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3] to [k2 => v2ab, k3 => v3b]
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> The change
> > > > >>>>>>>>>>    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> > > > >>>>>>>>>>    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > >>>>>>>>>> must happen in a single transaction.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Optimistic protocol to solve this:
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Each cache key must have a field `otx`, which is a unique orchestrator
> > > > >>>>>>>>>> TX identifier - it must be a parameter passed to all the services. If
> > > > >>>>>>>>>> `otx` is set to some value, it means that it is an intermediate key and
> > > > >>>>>>>>>> is visible only inside of some transaction; for a finalized key `otx`
> > > > >>>>>>>>>> must be null - it means the key is committed and visible for everyone.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Each cache value must have a field `ver` which is a version of that value.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use UUID.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Workflow is the following:
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Orchestrator starts the distributed transaction with `otx` = x and
> > > > >>>>>>>>>> passes this parameter to all the services.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Service A:
> > > > >>>>>>>>>> - does some computations
> > > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > > > >>>>>>>>>>      where
> > > > >>>>>>>>>>          Za - time left from the max Orchestrator TX duration after Service A ends
> > > > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` = x
> > > > >>>>>>>>>>          v2a has updated version `ver`
> > > > >>>>>>>>>> - returns a set of updated keys and all the old versions to the
> > > > >>>>>>>>>>   orchestrator, or just stores it in some special atomic cache like
> > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Service B:
> > > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows `otx` = x
> > > > >>>>>>>>>> - does computations
> > > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > >>>>>>>>>> - updates the set of updated keys like
> > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> > > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > >>>>>>>>>> - in a single transaction checks value versions for all the old values
> > > > >>>>>>>>>>       and replaces them with the calculated new ones
> > > > >>>>>>>>>> - does cleanup of temporary keys and values
> > > > >>>>>>>>>> - in case of version mismatch or TX timeout just rolls back and signals
> > > > >>>>>>>>>>       to Orchestrator to restart the job with a new `otx`
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> PROFIT!!
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> This approach even allows you to run independent parts of the graph in
> > > > >>>>>>>>>> parallel (with TX transfer you will always run only one at a time).
> > > > >>>>>>>>>> Also it does not require inventing any special fault tolerance technics
> > > > >>>>>>>>>> because Ignite caches are already fault tolerant and all the
> > > > >>>>>>>>>> intermediate results are virtually invisible and stored with TTL, thus
> > > > >>>>>>>>>> in case of any crash you will not have inconsistent state or garbage.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Sergi
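
For illustration, here is a minimal Java sketch of the Committer Service step
described above. The cache names ("data", "otxKeys"), the ValueRecord wrapper
and the encoding of the `otx` marker as a key suffix are assumptions made for
the example; only the Ignite calls themselves (ignite.cache(),
transactions().txStart(), IgniteCache get/put/remove) are real API:

import java.util.Map;
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class CommitterService {
    /** Value wrapper carrying the `ver` field from the protocol (assumption). */
    static class ValueRecord {
        Object val;
        UUID ver;
    }

    /** @return true if the distributed tx committed, false if it must be retried. */
    boolean commit(Ignite ignite, UUID otx) {
        IgniteCache<String, ValueRecord> data = ignite.cache("data");

        // [x => (k1 -> ver1, k2 -> ver2, ...)] maintained by the services.
        IgniteCache<UUID, Map<String, UUID>> otxKeys = ignite.cache("otxKeys");

        Map<String, UUID> expected = otxKeys.get(otx);

        try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
            for (Map.Entry<String, UUID> e : expected.entrySet()) {
                ValueRecord old = data.get(e.getKey());

                // Version mismatch: someone committed in between. The tx is
                // rolled back on close; the orchestrator restarts the job
                // with a new `otx`.
                if (old == null || !old.ver.equals(e.getValue()))
                    return false;

                // Replace the committed value with the temporary one (k1 <- k1x).
                data.put(e.getKey(), data.get(e.getKey() + ":" + otx));
            }

            tx.commit();
        }

        // Cleanup of temporary keys may happen outside the committing tx
        // and is backed by their TTL anyway.
        for (String k : expected.keySet())
            data.remove(k + ":" + otx);

        otxKeys.remove(otx);

        return true;
    }
}
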
> > > > >>>>>>>>>> [snip: rest of the quoted thread]
> >
> > --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
So what do you think on the issue?

Thu, 30 Mar 2017, 17:49 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Hi! Thanks for the help. I've created a ticket:
> https://issues.apache.org/jira/browse/IGNITE-4887
> and a commit:
> https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06436b638e5c
> We really need this feature.
>
> Thu, 30 Mar 2017 at 11:31, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
>
> Aleksey,
>
> I doubt your approach works as expected. Current transaction recovery
> protocol heavily relies on the originating node ID in its internal logic.
> For example, currently a transaction will be rolled back if you want to
> transfer a transaction ownership to another node and original tx owner
> fails. An attempt to commit such a transaction on another node may fail
> with all sorts of assertions. After transaction ownership changed, you need
> to notify all current transaction participants about this change, and it
> should also be done failover-safe, let alone that you did not add any tests
> for these cases.
>
> I back Denis here. Please create a ticket first and come up with clear
> use-cases, API and protocol changes design. It is hard to reason about the
> changes you've made when we do not even understand why you are making these
> changes and how they are supposed to work.
>
> --AG
>
> 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > So, what do you think on my idea?
> >
> > Wed, 29 Mar 2017 at 10:35, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >
> > > Hi! No, I don't have a ticket for this.
> > > In the ticket I have implemented methods that change the transaction
> > > status to STOP, thus letting it commit the transaction in another thread.
> > > In another thread you are going to restart the transaction in order to
> > > commit it.
> > > The mechanism behind it is obvious: we change the thread id to a newer
> > > one in ThreadMap, and make use of serialization of txState and of the
> > > transactions themselves to transfer them into another thread.
> > >
> > >
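
A purely hypothetical usage sketch of what is being proposed here (neither a
STOP status nor resuming a tx in another thread existed in Ignite's public
API at this point; suspend()/resume() below are stand-in names for the
proposed operations, while txStart(), put() and commit() are real API):

Ignite ignite = Ignition.ignite();
IgniteCache<String, Integer> cache = ignite.cache("data");

Transaction tx = ignite.transactions().txStart();
cache.put("k1", 1);
tx.suspend();             // proposed: detach the tx from this thread

// ... later, in another thread (or, eventually, another JVM):

tx.resume();              // proposed: re-attach the tx and continue it
cache.put("k2", 2);
tx.commit();              // both updates commit atomically
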
> > > Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:
> > >
> > > Aleksey,
> > >
> > > Do you have a ticket for this? Could you briefly list what exactly was
> > > done and how the things work.
> > >
> > > —
> > > Denis
> > >
> > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > >
> > > > Hi, Igniters! I've made an implementation of transactions with a
> > > > non-single coordinator. Here you can start a transaction in one thread
> > > > and commit it in another thread.
> > > > Take a look at it. Give your thoughts on it.
> > > >
> > > > https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > > >
> > > > Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >
> > > >> You know better, go ahead! :)
> > > >>
> > > >> Sergi
> > > >>
> > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >>
> > > >>> we've discovered several problems regarding your "accumulation"
> > > >>> approach. These are:
> > > >>>
> > > >>>   1. performance issues when transferring data from the temporary
> > > >>>   cache to the permanent one. Keep in mind the great deal of
> > > >>>   concurrent transactions in the Service committer
> > > >>>   2. extreme memory load when keeping the temporary cache in memory
> > > >>>   3. as long as the user is not acquainted with Ignite, working with
> > > >>>   the cache must be transparent for him. Keep this in mind. A user's
> > > >>>   node can evaluate logic with no transaction at all, so we should
> > > >>>   deal with both types of execution flow: transactional and
> > > >>>   non-transactional. Another problem is transaction id support at the
> > > >>>   user node. We would have to handle all these issues and many more.
> > > >>>   4. we cannot pessimistically lock an entity.
> > > >>>
> > > >>> As a result, we decided to move on building a distributed transaction.
> > > >>> We put aside your "accumulation" approach until we realize how to
> > > >>> solve the difficulties above.
> > > >>>
> > > >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >>>
> > > >>>> The problem "How to run millions of entities, and millions of
> > > >>>> operations, on a single Pentium3" is out of scope here. Do the math,
> > > >>>> plan capacity reasonably.
> > > >>>>
> > > >>>> Sergi
> > > >>>>
> > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >>>>
> > > >>>>> hmm, if we have millions of entities and millions of operations,
> > > >>>>> would not this approach lead to memory overflow and performance
> > > >>>>> degradation?
> > > >>>>>
> > > >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >>>>>
> > > >>>>>> 1. Actually you have to check versions on all the values you read
> > > >>>>>> during the tx.
> > > >>>>>>
> > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > > >>>>>>
> > > >>>>>> put(k1, get(k2) + 5)
> > > >>>>>>
> > > >>>>>> We have to remember the version for k2. This logic can be relatively
> > > >>>>>> easily encapsulated in a framework atop of Ignite. You need to
> > > >>>>>> implement one to make all this stuff usable.
> > > >>>>>>
> > > >>>>>> 2. I suggest to avoid any locking here, because you easily will end
> > > >>>>>> up with deadlocks. If you do not have too frequent updates for your
> > > >>>>>> keys, optimistic approach will work just fine.
> > > >>>>>>
> > > >>>>>> Theoretically in the Committer Service you can start a thread for
> > > >>>>>> the lifetime of the whole distributed transaction, take a lock on
> > > >>>>>> the key using IgniteCache.lock(K key) before executing any Services,
> > > >>>>>> wait for all the services to complete, execute optimistic commit in
> > > >>>>>> the same thread while keeping this lock and then release it. Notice
> > > >>>>>> that all the Ignite transactions inside of all Services must be
> > > >>>>>> optimistic here to be able to read this locked key.
> > > >>>>>>
> > > >>>>>> But again I do not recommend you using this approach until you have
> > > >>>>>> a reliable deadlock avoidance scheme.
> > > >>>>>> reliable deadlock avoidance scheme.
> > > >>>>>>
> > > >>>>>> Sergi
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
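
As a rough sketch of that lock-based variant (cache and key names are
illustrative assumptions; the real Ignite API used is IgniteCache.lock(K),
which returns a java.util.concurrent.locks.Lock and requires a TRANSACTIONAL
cache):

import java.util.concurrent.locks.Lock;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class LockingCommitter {
    void runJob(Ignite ignite) {
        IgniteCache<String, Object> cache = ignite.cache("data");

        // One thread owns this lock for the lifetime of the whole
        // distributed transaction.
        Lock lock = cache.lock("k1");

        lock.lock();
        try {
            invokeServices();   // Services must use OPTIMISTIC Ignite txs
                                // so they can still read the locked key.
            optimisticCommit(); // version check + swap, in this same thread,
                                // while the lock is still held
        }
        finally {
            lock.unlock();
        }
    }

    void invokeServices()   { /* call Service A, Service B, ... */ }
    void optimisticCommit() { /* as in the committer sketch above */ }
}
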
> > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >>>>>>
> > > >>>>>>> Yeah, now i got it.
> > > >>>>>>> There are some doubts on this approach:
> > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> > > >>>>>>> altered the original values, you must check versions of other
> > > >>>>>>> dependent keys. How could we obtain those keys (in an automated
> > > >>>>>>> manner, of course)?
> > > >>>>>>> 2) How could we lock a key before some Service A introduces
> > > >>>>>>> changes, so that no other service is allowed to change this
> > > >>>>>>> key-value? (a sort of pessimistic blocking)
> > > >>>>>>> May be you know some implementations of such approach?
> > > >>>>>>>
> > > >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>
> > > >>>>>>>> Thank you very much for the help. I will answer later.
> > > >>>>>>>>
> > > >>>>>>>> Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >>>>>>>>
> > > >>>>>>>> All the services do not update keys in place, but only generate
> > > >>>>>>>> new keys augmented by otx and store the updated value in the same
> > > >>>>>>>> cache + remember the keys and versions participating in the
> > > >>>>>>>> transaction in some separate atomic cache.
> > > >>>>>>>>
> > > >>>>>>>> Follow this sequence of changes applied to cache contents by each
> > > >>>>>>>> Service:
> > > >>>>>>>>
> > > >>>>>>>> Initial cache contents:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>
> > > >>>>>>>> Cache contents after Service A:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2a]
> > > >>>>>>>>
> > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic cache
> > > >>>>>>>>
> > > >>>>>>>> Cache contents after Service B:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2ab]
> > > >>>>>>>>            [k3x => v3b]
> > > >>>>>>>>
> > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >>>>>>>> separate atomic cache
> > > >>>>>>>>
> > > >>>>>>>> Finally the Committer Service takes this map of updated keys and
> > > >>>>>>>> their versions from some separate atomic cache, starts an Ignite
> > > >>>>>>>> transaction and replaces all the values for k* keys with the values
> > > >>>>>>>> taken from the k*x keys. The successful result must be the following:
> > > >>>>>>>>
> > > >>>>>>>>            [k1 => v1a]
> > > >>>>>>>>            [k2 => v2ab]
> > > >>>>>>>>            [k3 => v3b]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2ab]
> > > >>>>>>>>            [k3x => v3b]
> > > >>>>>>>>
> > > >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >>>>>>>> separate atomic cache
> > > >>>>>>>>
> > > >>>>>>>> But the Committer Service also has to check that no one updated the
> > > >>>>>>>> original values before us, because otherwise we can not give any
> > > >>>>>>>> serializability guarantee for these distributed transactions. Here
> > > >>>>>>>> we may need to check not only versions of the updated keys, but
> > > >>>>>>>> also versions of any other keys the end result depends on.
> > > >>>>>>>>
> > > >>>>>>>> After that the Committer Service has to do a cleanup (may be
> > > >>>>>>>> outside of the committing tx) to come to the following final state:
> > > >>>>>>>>
> > > >>>>>>>>            [k1 => v1a]
> > > >>>>>>>>            [k2 => v2ab]
> > > >>>>>>>>            [k3 => v3b]
> > > >>>>>>>>
> > > >>>>>>>> Makes sense?
> > > >>>>>>>>
> > > >>>>>>>> Sergi
> > > >>>>>>>>
> > > >>>>>>>>
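
For illustration, a minimal Java sketch of the Service-side write step above
(the cache name and the key-suffix encoding of `otx` are assumptions; the
expiry API - IgniteCache.withExpiryPolicy() with the javax.cache.expiry
classes - is real):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ServiceA {
    void update(Ignite ignite, String otx, long remainingMs) {
        IgniteCache<String, Object> data = ignite.cache("data");

        // Za: time left from the max orchestrator TX duration. Entries
        // created through this view expire automatically, so a crashed
        // transaction leaves no garbage behind.
        IgniteCache<String, Object> tmp = data.withExpiryPolicy(
            new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, remainingMs)));

        // Temporary keys carry the `otx` marker (encoded as a suffix here).
        tmp.put("k1:" + otx, "v1a");
        tmp.put("k2:" + otx, "v2a");
    }
}
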
> > > >>>>>>>> [snip: rest of the quoted thread]
>
> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Hi! Thanks for the help. I've created a ticket:
https://issues.apache.org/jira/browse/IGNITE-4887
and a commit:
https://github.com/voipp/ignite/commit/aa3487bd9c203394f534c605f84e06436b638e5c
We really need this feature.

чт, 30 мар. 2017 г. в 11:31, Alexey Goncharuk <al...@gmail.com>:

> Aleksey,
>
> I doubt your approach works as expected. Current transaction recovery
> protocol heavily relies on the originating node ID in its internal logic.
> For example, currently a transaction will be rolled back if you want to
> transfer transaction ownership to another node and the original tx owner
> fails. An attempt to commit such a transaction on another node may fail
> with all sorts of assertions. After transaction ownership has changed, you need
> to notify all current transaction participants about this change, and it
> should also be done failover-safe, let alone that you did not add any tests
> for these cases.
>
> I back Denis here. Please create a ticket first and come up with clear
> use-cases, API and protocol changes design. It is hard to reason about the
> changes you've made when we do not even understand why you are making these
> changes and how they are supposed to work.
>
> --AG
>
> 2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > So, what do you think of my idea?
> >
> > ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> >
> > > Hi! No, I don't have a ticket for this.
> > > In the ticket I have implemented methods that change the transaction
> > > status to STOP, thus letting another thread commit the transaction: in
> > > that other thread you restart the transaction in order to commit it.
> > > The mechanism behind it is simple: we change the thread id to the new one
> > > in the thread map, and make use of serialization of the txState and of
> > > the transaction itself to transfer them into another thread.
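> > >
> > > For reference, the intended usage pattern looks roughly like this
> > > (tx.stop() and txStart(tx) are the methods proposed in the branch, not
> > > released Ignite API; everything else is standard Ignite):
> > >
> > >     Transaction tx = ignite.transactions().txStart(concurrency, isolation);
> > >     cache.put("key1", 1);
> > >     tx.stop();                 // detach the tx from the current thread
> > >
> > >     // later, in another thread:
> > >     IgniteTransactions ts = ignite.transactions();
> > >     ts.txStart(tx);            // rebind the tx to this thread
> > >     cache.put("key2", 2);
> > >     tx.commit();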
> > >
> > >
> > > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dm...@apache.org>:
> > >
> > > Aleksey,
> > >
> > > Do you have a ticket for this? Could you briefly list what exactly was
> > > done and how things work?
> > >
> > > —
> > > Denis
> > >
> > > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com>
> > > wrote:
> > > >
> > > > Hi, Igniters! I've made an implementation of transactions with a
> > > > non-single coordinator. Here you can start a transaction in one thread
> > > > and commit it in another thread.
> > > > Take a look at it and give your thoughts on it.
> > > >
> > > >
> > > https://github.com/voipp/ignite/pull/10/commits/
> > 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > > >
> > > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > > >
> > > >> You know better, go ahead! :)
> > > >>
> > > >> Sergi
> > > >>
> > > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > >>
> > > >>> we've discovered several problems regarding your "accumulation"
> > > >>> approach. These are:
> > > >>>
> > > >>>   1. performance issues when transferring data from the temporary
> > > >>>   cache to the permanent one. Keep in mind the great deal of
> > > >>>   concurrent transactions in the Service committer.
> > > >>>   2. extreme memory load when keeping the temporary cache in memory.
> > > >>>   3. as long as the user is not acquainted with Ignite, working with
> > > >>>   the cache must be transparent for him. Keep this in mind. The
> > > >>>   user's node can evaluate logic with no transaction at all, so we
> > > >>>   should deal with both types of execution flow: transactional and
> > > >>>   non-transactional. Another problem is transaction id support at the
> > > >>>   user node. We would have had to handle all these issues and many
> > > >>>   more.
> > > >>>   4. we cannot pessimistically lock an entity.
> > > >>>
> > > >>> As a result, we decided to move on to building a distributed
> > > >>> transaction. We put aside your "accumulation" approach until we
> > > >>> realize how to solve the difficulties above.
> > > >>>
> > > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> > sergi.vladykin@gmail.com
> > > >:
> > > >>>
> > > >>>> The problem "How to run millions of entities, and millions of
> > > >>>> operations on a single Pentium3" is out of scope here. Do the math,
> > > >>>> plan capacity reasonably.
> > > >>>>
> > > >>>> Sergi
> > > >>>>
> > > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > >>> :
> > > >>>>
> > > >>>>> hmm, if we have millions of entities and millions of operations,
> > > >>>>> wouldn't this approach lead to memory overflow and performance
> > > >>>>> degradation?
> > > >>>>>
> > > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > > >> sergi.vladykin@gmail.com
> > > >>>> :
> > > >>>>>
> > > >>>>>> 1. Actually you have to check versions on all the values you have
> > > >>>>>> read during the tx.
> > > >>>>>>
> > > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > > >>>>>>
> > > >>>>>> put(k1, get(k2) + 5)
> > > >>>>>>
> > > >>>>>> We have to remember the version for k2. This logic can be
> > > >>>>>> relatively easily encapsulated in a framework atop of Ignite. You
> > > >>>>>> need to implement one to make all this stuff usable.
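> > > >>>>>>
> > > >>>>>> A minimal sketch of such a read-tracking wrapper (VersionedValue
> > > >>>>>> with its `ver` field is an assumption of this sketch, not Ignite
> > > >>>>>> API; imports: java.util.*, org.apache.ignite.IgniteCache):
> > > >>>>>>
> > > >>>>>>     class VersionTrackingCache<K> {
> > > >>>>>>         private final IgniteCache<K, VersionedValue> cache;
> > > >>>>>>         private final Map<K, UUID> readVers = new HashMap<>();
> > > >>>>>>
> > > >>>>>>         VersionTrackingCache(IgniteCache<K, VersionedValue> cache) {
> > > >>>>>>             this.cache = cache;
> > > >>>>>>         }
> > > >>>>>>
> > > >>>>>>         VersionedValue get(K key) {
> > > >>>>>>             VersionedValue v = cache.get(key);
> > > >>>>>>
> > > >>>>>>             if (v != null)
> > > >>>>>>                 readVers.put(key, v.ver); // remember what we read
> > > >>>>>>
> > > >>>>>>             return v;
> > > >>>>>>         }
> > > >>>>>>
> > > >>>>>>         // the committer later compares this against the cache
> > > >>>>>>         Map<K, UUID> readSet() {
> > > >>>>>>             return readVers;
> > > >>>>>>         }
> > > >>>>>>     }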
> > > >>>>>>
> > > >>>>>> 2. I suggest to avoid any locking here, because you easily will
> > > >>>>>> end up with deadlocks. If you do not have too frequent updates for
> > > >>>>>> your keys, optimistic approach will work just fine.
> > > >>>>>>
> > > >>>>>> Theoretically in the Committer Service you can start a thread for
> > > >>>>>> the lifetime of the whole distributed transaction, take a lock on
> > > >>>>>> the key using IgniteCache.lock(K key) before executing any
> > > >>>>>> Services, wait for all the services to complete, execute
> > > >>>>>> optimistic commit in the same thread while keeping this lock and
> > > >>>>>> then release it. Notice that all the Ignite transactions inside of
> > > >>>>>> all Services must be optimistic here to be able to read this
> > > >>>>>> locked key.
> > > >>>>>>
> > > >>>>>> But again I do not recommend using this approach until you have a
> > > >>>>>> reliable deadlock avoidance scheme.
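> > > >>>>>>
> > > >>>>>> Sketched in code, that committer-side locking could look like this
> > > >>>>>> (runServices() and commitAccumulated() are hypothetical
> > > >>>>>> placeholders for the service graph and the final write-back):
> > > >>>>>>
> > > >>>>>>     Lock lock = cache.lock(key); // java.util.concurrent.locks.Lock
> > > >>>>>>     lock.lock();
> > > >>>>>>     try {
> > > >>>>>>         runServices(otx); // inner txs must all be OPTIMISTIC
> > > >>>>>>
> > > >>>>>>         try (Transaction tx = ignite.transactions()
> > > >>>>>>             .txStart(OPTIMISTIC, SERIALIZABLE)) {
> > > >>>>>>             commitAccumulated(otx);
> > > >>>>>>             tx.commit();
> > > >>>>>>         }
> > > >>>>>>     }
> > > >>>>>>     finally {
> > > >>>>>>         lock.unlock();
> > > >>>>>>     }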
> > > >>>>>>
> > > >>>>>> Sergi
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > > >>> alkuznetsov.sb@gmail.com
> > > >>>>> :
> > > >>>>>>
> > > >>>>>>> Yeah, now I got it.
> > > >>>>>>> There are some doubts on this approach:
> > > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> > > >>>>>>> altered the original values, you must check the versions of other
> > > >>>>>>> dependent keys. How could we obtain those keys (in an automated
> > > >>>>>>> manner, of course)?
> > > >>>>>>> 2) How could we lock a key before some Service A introduces
> > > >>>>>>> changes, so that no other service is allowed to change this
> > > >>>>>>> key-value (a sort of pessimistic blocking)?
> > > >>>>>>> Maybe you know some implementations of such an approach?
> > > >>>>>>>
> > > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > > >>>>> alkuznetsov.sb@gmail.com
> > > >>>>>>> :
> > > >>>>>>>
> > > >>>>>>>> Thank you very much for help.  I will answer later.
> > > >>>>>>>>
> > > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > > >>>>> sergi.vladykin@gmail.com
> > > >>>>>>> :
> > > >>>>>>>>
> > > >>>>>>>> All the services do not update the key in place, but only
> > > >>>>>>>> generate new keys augmented by otx and store the updated value in
> > > >>>>>>>> the same cache + remember the keys and versions participating in
> > > >>>>>>>> the transaction in some separate atomic cache.
> > > >>>>>>>>
> > > >>>>>>>> Follow this sequence of changes applied to the cache contents by
> > > >>>>>>>> each Service:
> > > >>>>>>>>
> > > >>>>>>>> Initial cache contents:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>
> > > >>>>>>>> Cache contents after Service A:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2a]
> > > >>>>>>>>
> > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > >>>>>>>>           atomic cache
> > > >>>>>>>>
> > > >>>>>>>> Cache contents after Service B:
> > > >>>>>>>>            [k1 => v1]
> > > >>>>>>>>            [k2 => v2]
> > > >>>>>>>>            [k3 => v3]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2ab]
> > > >>>>>>>>            [k3x => v3b]
> > > >>>>>>>>
> > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >>>>>>>>           separate atomic cache
> > > >>>>>>>>
> > > >>>>>>>> Finally the Committer Service takes this map of updated keys and
> > > >>>>>>>> their versions from some separate atomic cache, starts an Ignite
> > > >>>>>>>> transaction and replaces all the values for the k* keys with the
> > > >>>>>>>> values taken from the k*x keys. The successful result must be the
> > > >>>>>>>> following:
> > > >>>>>>>>
> > > >>>>>>>>            [k1 => v1a]
> > > >>>>>>>>            [k2 => v2ab]
> > > >>>>>>>>            [k3 => v3b]
> > > >>>>>>>>            [k1x => v1a]
> > > >>>>>>>>            [k2x => v2ab]
> > > >>>>>>>>            [k3x => v3b]
> > > >>>>>>>>
> > > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >>>>>>>>           separate atomic cache
> > > >>>>>>>>
> > > >>>>>>>> But the Committer Service also has to check that no one updated
> > > >>>>>>>> the original values before us, because otherwise we cannot give
> > > >>>>>>>> any serializability guarantee for these distributed transactions.
> > > >>>>>>>> Here we may need to check not only the versions of the updated
> > > >>>>>>>> keys, but also the versions of any other keys the end result
> > > >>>>>>>> depends on.
> > > >>>>>>>>
> > > >>>>>>>> After that the Committer Service has to do a cleanup (may be
> > > >>>>>>>> outside of the committing tx) to come to the following final
> > > >>>>>>>> state:
> > > >>>>>>>>
> > > >>>>>>>>            [k1 => v1a]
> > > >>>>>>>>            [k2 => v2ab]
> > > >>>>>>>>            [k3 => v3b]
> > > >>>>>>>>
> > > >>>>>>>> Makes sense?
> > > >>>>>>>>
> > > >>>>>>>> Sergi
> > > >>>>>>>>
> > > >>>>>>>>
> > > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > >>>>> alkuznetsov.sb@gmail.com
> > > >>>>>>> :
> > > >>>>>>>>
> > > >>>>>>>>>   - What do you mean by saying "*in a single transaction checks
> > > >>>>>>>>>   value versions for all the old values and replaces them with
> > > >>>>>>>>>   calculated new ones*"? Every time you change a value (in some
> > > >>>>>>>>>   service), you store it to *some special atomic cache*, so when
> > > >>>>>>>>>   all services have ceased working, the Service committer has
> > > >>>>>>>>>   the values with the last versions.
> > > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
> > > >>>>>>>>>   Service committer persists them into the permanent store,
> > > >>>>>>>>>   doesn't it?
> > > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> > > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
> > > >>>>>>>>>   would it match?
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
> > > >>>>>> sergi.vladykin@gmail.com
> > > >>>>>>>> :
> > > >>>>>>>>>
> > > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > > >>>>>>>>>> application level.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Let's say we have to call 2 services in the following order:
> > > >>>>>>>>>> - Service A: wants to update keys [k1 => v1, k2 => v2]
> > > >>>>>>>>>>   to [k1 => v1a, k2 => v2a]
> > > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> > > >>>>>>>>>>   to [k2 => v2ab, k3 => v3b]
> > > >>>>>>>>>>
> > > >>>>>>>>>> The change
> > > >>>>>>>>>>    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> > > >>>>>>>>>>    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > >>>>>>>>>> must happen in a single transaction.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Optimistic protocol to solve this:
> > > >>>>>>>>>>
> > > >>>>>>>>>> Each cache key must have a field `otx`, which is a unique
> > > >>>>>>>>>> orchestrator TX identifier - it must be a parameter passed to
> > > >>>>>>>>>> all the services. If `otx` is set to some value it means that
> > > >>>>>>>>>> it is an intermediate key and is visible only inside of some
> > > >>>>>>>>>> transaction; for the finalized key `otx` must be null - it
> > > >>>>>>>>>> means the key is committed and visible for everyone.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Each cache value must have a field `ver` which is a version of
> > > >>>>>>>>>> that value.
> > > >>>>>>>>>>
> > > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use
> > > >>>>>>>>>> UUID.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Workflow is the following:
> > > >>>>>>>>>>
> > > >>>>>>>>>> Orchestrator starts the distributed transaction with `otx` = x
> > > >>>>>>>>>> and passes this parameter to all the services.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Service A:
> > > >>>>>>>>>> - does some computations
> > > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > > >>>>>>>>>>      where
> > > >>>>>>>>>>          Za - time left from the max Orchestrator TX duration
> > > >>>>>>>>>>          after Service A ends
> > > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` = x
> > > >>>>>>>>>>          v2a has updated version `ver`
> > > >>>>>>>>>> - returns a set of updated keys and all the old versions to the
> > > >>>>>>>>>>   orchestrator, or just stores it in some special atomic cache
> > > >>>>>>>>>>   like [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > >>>>>>>>>>
> > > >>>>>>>>>> Service B:
> > > >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows
> > > >>>>>>>>>>   `otx` = x
> > > >>>>>>>>>> - does computations
> > > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > >>>>>>>>>> - updates the set of updated keys like
> > > >>>>>>>>>>   [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> > > >>>>>>>>>>
> > > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> > > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > >>>>>>>>>> - in a single transaction checks value versions for all the old
> > > >>>>>>>>>>   values and replaces them with the calculated new ones
> > > >>>>>>>>>> - does cleanup of temporary keys and values
> > > >>>>>>>>>> - in case of version mismatch or TX timeout just rollbacks and
> > > >>>>>>>>>>   signals to Orchestrator to restart the job with a new `otx`
> > > >>>>>>>>>>
> > > >>>>>>>>>> PROFIT!!
> > > >>>>>>>>>>
> > > >>>>>>>>>> This approach even allows you to run independent parts of the
> > > >>>>>>>>>> graph in parallel (with TX transfer you will always run only
> > > >>>>>>>>>> one at a time). Also it does not require inventing any special
> > > >>>>>>>>>> fault tolerance techniques, because Ignite caches are already
> > > >>>>>>>>>> fault tolerant and all the intermediate results are virtually
> > > >>>>>>>>>> invisible and stored with TTL; thus in case of any crash you
> > > >>>>>>>>>> will not have inconsistent state or garbage.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Sergi
> > > >>>>>>>>>>
> > > >>>>>>>>>>
> > > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > >>>>>>> alkuznetsov.sb@gmail.com
> > > >>>>>>>>> :
> > > >>>>>>>>>>
> > > >>>>>>>>>>> Okay, we are open for proposals on the business task. I mean, we can
> > > >>>>>>>>>>> make use of some other thing, not a distributed transaction. Not a
> > > >>>>>>>>>>> transaction yet.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> > > >>>>>>>>>>>
> > > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi already mentioned,
> > > >>>>>>>>>>>> the problem is far more complex than simply passing TX state over a
> > > >>>>>>>>>>>> wire. Most probably a kind of coordinator will still be required to
> > > >>>>>>>>>>>> manage all kinds of failures. This task should be started with a
> > > >>>>>>>>>>>> clean design proposal explaining how we handle all these concurrent
> > > >>>>>>>>>>>> events. And only then, when we understand all implications, should
> > > >>>>>>>>>>>> we move to the development stage.
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>> <alkuznetsov.sb@gmail.com> wrote:
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>>> Right
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin
> > > >>>>>>>>>>>>> <sergi.vladykin@gmail.com>:
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some predefined graph
> > > >>>>>>>>>>>>>> of distributed services to be invoked, calls them by some kind of
> > > >>>>>>>>>>>>>> RPC and passes the needed parameters between them, right?
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> Sergi
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible for managing
> > > >>>>>>>>>>>>>>> business scenario flows. Many nodes are involved in the scenarios.
> > > >>>>>>>>>>>>>>> They exchange data and follow one another. If you are acquainted
> > > >>>>>>>>>>>>>>> with the BPMN framework, the orchestrator is like a BPMN engine.
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> вт, 14 Мар 2017 г., 18:56 Sergi Vladykin
> > > >>>>>>>>>>>>>>> <sergi.vladykin@gmail.com>:
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing from Microsoft or
> > > >>>>>>>>>>>>>>>> your custom in-house software?
> > > >>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>> Sergi
> > > >>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers which fulfill custom
> > > >>>>>>>>>>>>>>>>> logic. These servers compound an oriented graph (a BPMN process)
> > > >>>>>>>>>>>>>>>>> which is controlled by the Orchestrator. For instance, *server1*
> > > >>>>>>>>>>>>>>>>> creates *variable A* with value 1, persists it to the IGNITE
> > > >>>>>>>>>>>>>>>>> cache, then creates *variable B* and sends it to *server2*. The
> > > >>>>>>>>>>>>>>>>> latter receives *variable B*, does some logic with it and stores
> > > >>>>>>>>>>>>>>>>> it to IGNITE. All the work made by both servers must be fulfilled
> > > >>>>>>>>>>>>>>>>> in *one* transaction, because we need all the information done,
> > > >>>>>>>>>>>>>>>>> or nothing (rolled back). The scenario is managed by the
> > > >>>>>>>>>>>>>>>>> orchestrator.
> > > >>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin
> > > >>>>>>>>>>>>>>>>> <sergi.vladykin@gmail.com>:
> > > >>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your wrong solution for it.
> > > >>>>>>>>>>>>>>>>>> Let's try again: what is the business case?
> > > >>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>> Sergi
> > > >>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>> The case is the following: one starts a transaction on one node
> > > >>>>>>>>>>>>>>>>>>> and commits this transaction on another jvm node (or rolls it
> > > >>>>>>>>>>>>>>>>>>> back remotely).
> > > >>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin
> > > >>>>>>>>>>>>>>>>>>> <sergi.vladykin@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for some simplistic scenario,
> > > >>>>>>>>>>>>>>>>>>>> get ready to write many fault tolerance tests and make sure
> > > >>>>>>>>>>>>>>>>>>>> that your TXs work gracefully in all modes in case of crashes.
> > > >>>>>>>>>>>>>>>>>>>> Also make sure that we do not have any performance drops after
> > > >>>>>>>>>>>>>>>>>>>> all your changes in the existing benchmarks. All in all, I
> > > >>>>>>>>>>>>>>>>>>>> don't believe these conditions will be met and your
> > > >>>>>>>>>>>>>>>>>>>> contribution will be accepted.
> > > >>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>> Better solution to what problem? Sending a TX to another node?
> > > >>>>>>>>>>>>>>>>>>>> The problem statement itself is already wrong. What business
> > > >>>>>>>>>>>>>>>>>>>> case are you trying to solve? I'm sure everything you need can
> > > >>>>>>>>>>>>>>>>>>>> be done in a much simpler and more efficient way at the
> > > >>>>>>>>>>>>>>>>>>>> application level.
> > > >>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>> Sergi
> > > >>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
> > > >>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin
> > > >>>>>>>>>>>>>>>>>>>>> <sergi.vladykin@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>> Just serializing the TX object and deserializing it on
> > > >>>>>>>>>>>>>>>>>>>>>> another node is meaningless, because the other nodes
> > > >>>>>>>>>>>>>>>>>>>>>> participating in the TX have to know about the new
> > > >>>>>>>>>>>>>>>>>>>>>> coordinator. This will require protocol changes, and we
> > > >>>>>>>>>>>>>>>>>>>>>> definitely will have fault tolerance and performance issues.
> > > >>>>>>>>>>>>>>>>>>>>>> IMO the whole idea is wrong and it makes no sense to waste
> > > >>>>>>>>>>>>>>>>>>>>>> time on it.
> > > >>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>> Sergi
> > > >>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState implementation contains
> > > >>>>>>>>>>>>>>>>>>>>>>> IgniteTxEntrys, which are supposed to be transferable.
> > > >>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan
> > > >>>>>>>>>>>>>>>>>>>>>>> <dsetrakyan@apache.org>:
> > > >>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that we are passing
> > > >>>>>>>>>>>>>>>>>>>>>>>> transaction objects around. Such an object may contain all
> > > >>>>>>>>>>>>>>>>>>>>>>>> sorts of Ignite context. If some data needs to be passed
> > > >>>>>>>>>>>>>>>>>>>>>>>> across, we should create a special transfer object in this
> > > >>>>>>>>>>>>>>>>>>>>>>>> case.
> > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>> D.
> > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com> wrote:
> > > >>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>> Well, there are a couple of issues preventing the
> > > >>>>>>>>>>>>>>>>>>>>>>>>> transaction from proceeding. At first, after transaction
> > > >>>>>>>>>>>>>>>>>>>>>>>>> serialization and deserialization on the remote server
> > > >>>>>>>>>>>>>>>>>>>>>>>>> there is no txState, so I'm going to put it in
> > > >>>>>>>>>>>>>>>>>>>>>>>>> writeExternal()\readExternal().
> > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>> The last one is that the deserialized transaction lacks
> > > >>>>>>>>>>>>>>>>>>>>>>>>> the shared cache context field in TransactionProxyImpl.
> > > >>>>>>>>>>>>>>>>>>>>>>>>> Perhaps it must be injected by GridResourceProcessor?
> > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> While starting and continuing a transaction in different
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> jvms I run into a serialization exception in
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> writeExternalMeta:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void writeExternal(ObjectOutput out)
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> throws IOException {
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> Some meta cannot be serialized.
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk
> > > >>>>>>>>>>>>>>>>>>>>>>>>>> <alexey.goncharuk@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to get what you want, but I have
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> a few concerns:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the proposed change? In your
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> test, you pass an instance of a transaction created on
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> ignite(0) to the ignite instance ignite(1). This is
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> obviously not possible in a truly distributed
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> (multi-jvm) environment.
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize cache update actions and
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> transaction commit? Say, you have one node that decided
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> to commit, but another node is still writing within
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> this transaction. How do you make sure that two nodes
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> will not call commit() and rollback() simultaneously?
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> - How do you make sure that either commit() or
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> rollback() is called if an originator failed?
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>> <somefireone@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> Alexey Goncharuk, heh, my initial understanding was
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> that transferring of tx ownership from one node to
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> another would happen automatically when the
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> originating node goes down.
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm aiming to span a transaction over multiple
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> threads, nodes and jvms (soon), so every node is able
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> to roll back or commit the common transaction. It
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> turned out I need to transfer the tx between nodes in
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> order to commit the transaction on a different node
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> (in the same jvm).
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> <alexey.goncharuk@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Do you mean that you want a concept of transferring
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> tx ownership from one node to another? My initial
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> understanding was that you want to be able to update
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> keys in a transaction from multiple threads in
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parallel.
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --AG
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well. Consider a transaction started on one node
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and continued on another one. The following test
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> describes my idea:
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ignite ignite1 = ignite(0);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteTransactions transactions = ignite1.transactions();
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteCache<String, Integer> cache =
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     ignite1.getOrCreateCache("testCache");
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Transaction tx = transactions.txStart(concurrency, isolation);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key1", 1);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key2", 2);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> tx.stop();
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     IgniteTransactions ts = ignite(1).transactions();
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertNull(ts.tx());
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     ts.txStart(tx);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     cache.put("key3", 3);
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertTrue(cache.remove("key2"));
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     tx.commit();
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     return true;
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> });
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> fut.get();
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertFalse(cache.containsKey("key2"));
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In method *ts.txStart(...)* we just rebind *tx* to
> > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the current thread:
> > >
> > > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Alexey Goncharuk <al...@gmail.com>.
Aleksey,

I doubt your approach works as expected. Current transaction recovery
protocol heavily relies on the originating node ID in its internal logic.
For example, currently a transaction will be rolled back if you want to
transfer transaction ownership to another node and the original tx owner
fails. An attempt to commit such a transaction on another node may fail
with all sorts of assertions. After transaction ownership has changed, you need
to notify all current transaction participants about this change, and it
should also be done failover-safe, let alone that you did not add any tests
for these cases.

I back Denis here. Please create a ticket first and come up with clear
use-cases, API and protocol changes design. It is hard to reason about the
changes you've made when we do not even understand why you are making these
changes and how they are supposed to work.

--AG

2017-03-30 10:43 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> So, what do you think of my idea?
>
> ср, 29 мар. 2017 г. в 10:35, ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Hi! No, I don't have a ticket for this.
> > In the ticket I have implemented methods that change the transaction
> > status to STOP, thus letting another thread commit the transaction: in
> > that other thread you restart the transaction in order to commit it.
> > The mechanism behind it is simple: we change the thread id to the new one
> > in the thread map, and make use of serialization of the txState and of the
> > transaction itself to transfer them into another thread.
> >
> >
> > вт, 28 мар. 2017 г. в 20:15, Denis Magda <dm...@apache.org>:
> >
> > Aleksey,
> >
> > Do you have a ticket for this? Could you briefly list what exactly was
> > done and how things work?
> >
> > —
> > Denis
> >
> > > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com>
> > wrote:
> > >
> > > Hi, Igniters! I've made an implementation of transactions with a
> > > non-single coordinator. Here you can start a transaction in one thread
> > > and commit it in another thread.
> > > Take a look at it and give your thoughts on it.
> > >
> > >
> > https://github.com/voipp/ignite/pull/10/commits/
> 3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> > >
> > > пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <sergi.vladykin@gmail.com
> >:
> > >
> > >> You know better, go ahead! :)
> > >>
> > >> Sergi
> > >>
> > >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > >>
> > >>> we've discovered several problems regarding your "accumulation"
> > >>> approach. These are:
> > >>>
> > >>>   1. performance issues when transferring data from the temporary
> > >>>   cache to the permanent one. Keep in mind the great deal of
> > >>>   concurrent transactions in the Service committer.
> > >>>   2. extreme memory load when keeping the temporary cache in memory.
> > >>>   3. as long as the user is not acquainted with Ignite, working with
> > >>>   the cache must be transparent for him. Keep this in mind. The user's
> > >>>   node can evaluate logic with no transaction at all, so we should deal
> > >>>   with both types of execution flow: transactional and
> > >>>   non-transactional. Another problem is transaction id support at the
> > >>>   user node. We would have had to handle all these issues and many
> > >>>   more.
> > >>>   4. we cannot pessimistically lock an entity.
> > >>>
> > >>> As a result, we decided to move on to building a distributed
> > >>> transaction. We put aside your "accumulation" approach until we realize
> > >>> how to solve the difficulties above.
> > >>>
> > >>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > >>>
> > >>>> The problem "How to run millions of entities, and millions of
> > >>>> operations on a single Pentium3" is out of scope here. Do the math,
> > >>>> plan capacity reasonably.
> > >>>>
> > >>>> Sergi
> > >>>>
> > >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > >>> :
> > >>>>
> > >>>>> hmm, if we have millions of entities and millions of operations,
> > >>>>> wouldn't this approach lead to memory overflow and performance
> > >>>>> degradation?
> > >>>>>
> > >>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
> > >> sergi.vladykin@gmail.com
> > >>>> :
> > >>>>>
> > >>>>>> 1. Actually you have to check versions on all the values you have
> > >>>>>> read during the tx.
> > >>>>>>
> > >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> > >>>>>>
> > >>>>>> put(k1, get(k2) + 5)
> > >>>>>>
> > >>>>>> We have to remember the version for k2. This logic can be relatively
> > >>>>>> easily encapsulated in a framework atop of Ignite. You need to
> > >>>>>> implement one to make all this stuff usable.
> > >>>>>>
> > >>>>>> 2. I suggest to avoid any locking here, because you easily will end
> > >>>>>> up with deadlocks. If you do not have too frequent updates for your
> > >>>>>> keys, optimistic approach will work just fine.
> > >>>>>>
> > >>>>>> Theoretically in the Committer Service you can start a thread for
> > >>>>>> the lifetime of the whole distributed transaction, take a lock on
> > >>>>>> the key using IgniteCache.lock(K key) before executing any Services,
> > >>>>>> wait for all the services to complete, execute optimistic commit in
> > >>>>>> the same thread while keeping this lock and then release it. Notice
> > >>>>>> that all the Ignite transactions inside of all Services must be
> > >>>>>> optimistic here to be able to read this locked key.
> > >>>>>>
> > >>>>>> But again I do not recommend using this approach until you have a
> > >>>>>> reliable deadlock avoidance scheme.
> > >>>>>>
> > >>>>>> Sergi
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>> alkuznetsov.sb@gmail.com
> > >>>>> :
> > >>>>>>
> > >>>>>>> Yeah, now I got it.
> > >>>>>>> There are some doubts on this approach:
> > >>>>>>> 1) During the optimistic commit phase, when you assure no one
> > >>>>>>> altered the original values, you must check the versions of other
> > >>>>>>> dependent keys. How could we obtain those keys (in an automated
> > >>>>>>> manner, of course)?
> > >>>>>>> 2) How could we lock a key before some Service A introduces
> > >>>>>>> changes, so that no other service is allowed to change this
> > >>>>>>> key-value (a sort of pessimistic blocking)?
> > >>>>>>> Maybe you know some implementations of such an approach?
> > >>>>>>>
> > >>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
> > >>>>> alkuznetsov.sb@gmail.com
> > >>>>>>> :
> > >>>>>>>
> > >>>>>>>> Thank you very much for help.  I will answer later.
> > >>>>>>>>
> > >>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
> > >>>>> sergi.vladykin@gmail.com
> > >>>>>>> :
> > >>>>>>>>
> > >>>>>>>> All the services do not update the key in place, but only generate
> > >>>>>>>> new keys augmented by otx and store the updated value in the same
> > >>>>>>>> cache + remember the keys and versions participating in the
> > >>>>>>>> transaction in some separate atomic cache.
> > >>>>>>>>
> > >>>>>>>> Follow this sequence of changes applied to the cache contents by
> > >>>>>>>> each Service:
> > >>>>>>>>
> > >>>>>>>> Initial cache contents:
> > >>>>>>>>            [k1 => v1]
> > >>>>>>>>            [k2 => v2]
> > >>>>>>>>            [k3 => v3]
> > >>>>>>>>
> > >>>>>>>> Cache contents after Service A:
> > >>>>>>>>            [k1 => v1]
> > >>>>>>>>            [k2 => v2]
> > >>>>>>>>            [k3 => v3]
> > >>>>>>>>            [k1x => v1a]
> > >>>>>>>>            [k2x => v2a]
> > >>>>>>>>
> > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic
> > >>>>>>>>           cache
> > >>>>>>>>
> > >>>>>>>> Cache contents after Service B:
> > >>>>>>>>            [k1 => v1]
> > >>>>>>>>            [k2 => v2]
> > >>>>>>>>            [k3 => v3]
> > >>>>>>>>            [k1x => v1a]
> > >>>>>>>>            [k2x => v2ab]
> > >>>>>>>>            [k3x => v3b]
> > >>>>>>>>
> > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > >>>>>>>>           separate atomic cache
> > >>>>>>>>
> > >>>>>>>> Finally the Committer Service takes this map of updated keys and
> > >>>>>>>> their versions from some separate atomic cache, starts an Ignite
> > >>>>>>>> transaction and replaces all the values for the k* keys with the
> > >>>>>>>> values taken from the k*x keys. The successful result must be the
> > >>>>>>>> following:
> > >>>>>>>>
> > >>>>>>>>            [k1 => v1a]
> > >>>>>>>>            [k2 => v2ab]
> > >>>>>>>>            [k3 => v3b]
> > >>>>>>>>            [k1x => v1a]
> > >>>>>>>>            [k2x => v2ab]
> > >>>>>>>>            [k3x => v3b]
> > >>>>>>>>
> > >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > >>>>>>>>           separate atomic cache
> > >>>>>>>>
> > >>>>>>>> But the Committer Service also has to check that no one updated
> > >>>>>>>> the original values before us, because otherwise we cannot give
> > >>>>>>>> any serializability guarantee for these distributed transactions.
> > >>>>>>>> Here we may need to check not only the versions of the updated
> > >>>>>>>> keys, but also the versions of any other keys the end result
> > >>>>>>>> depends on.
> > >>>>>>>>
> > >>>>>>>> After that the Committer Service has to do a cleanup (may be
> > >>>>>>>> outside of the committing tx) to come to the following final
> > >>>>>>>> state:
> > >>>>>>>>
> > >>>>>>>>            [k1 => v1a]
> > >>>>>>>>            [k2 => v2ab]
> > >>>>>>>>            [k3 => v3b]
> > >>>>>>>>
> > >>>>>>>> Makes sense?
> > >>>>>>>>
> > >>>>>>>> Sergi
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>>>> alkuznetsov.sb@gmail.com
> > >>>>>>> :
> > >>>>>>>>
> > >>>>>>>>>   - What do you mean by saying "*in a single transaction checks
> > >>>>>>>>>   value versions for all the old values and replaces them with
> > >>>>>>>>>   calculated new ones*"? Every time you change a value (in some
> > >>>>>>>>>   service), you store it to *some special atomic cache*, so when
> > >>>>>>>>>   all services have ceased working, the Service committer has the
> > >>>>>>>>>   values with the last versions.
> > >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the
> > >>>>>>>>>   Service committer persists them into the permanent store,
> > >>>>>>>>>   doesn't it?
> > >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> > >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions
> > >>>>>>>>>   would it match?
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
> > >>>>>> sergi.vladykin@gmail.com
> > >>>>>>>> :
> > >>>>>>>>>
> > >>>>>>>>>> Ok, here is what you actually need to implement at the
> > >>>>>>>>>> application level.
> > >>>>>>>>>>
> > >>>>>>>>>> Let's say we have to call 2 services in the following order:
> > >>>>>>>>>> - Service A: wants to update keys [k1 => v1, k2 => v2]
> > >>>>>>>>>>   to [k1 => v1a, k2 => v2a]
> > >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> > >>>>>>>>>>   to [k2 => v2ab, k3 => v3b]
> > >>>>>>>>>>
> > >>>>>>>>>> The change
> > >>>>>>>>>>    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> > >>>>>>>>>>    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > >>>>>>>>>> must happen in a single transaction.
> > >>>>>>>>>>
> > >>>>>>>>>> Optimistic protocol to solve this:
> > >>>>>>>>>>
> > >>>>>>>>>> Each cache key must have a field `otx`, which is a unique
> > >>>>>>>>>> orchestrator TX identifier - it must be a parameter passed to all
> > >>>>>>>>>> the services. If `otx` is set to some value it means that it is
> > >>>>>>>>>> an intermediate key and is visible only inside of some
> > >>>>>>>>>> transaction; for the finalized key `otx` must be null - it means
> > >>>>>>>>>> the key is committed and visible for everyone.
> > >>>>>>>>>>
> > >>>>>>>>>> Each cache value must have a field `ver` which is a version of
> > >>>>>>>>>> that value.
> > >>>>>>>>>>
> > >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use UUID.
> > >>>>>>>>>>
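> > >>>>>>>>>> For illustration, the key and value shapes could be sketched as
> > >>>>>>>>>> plain classes like this (the names follow the description above;
> > >>>>>>>>>> they are an assumption of the sketch, not an Ignite API):
> > >>>>>>>>>>
> > >>>>>>>>>>     class OtxKey {
> > >>>>>>>>>>         Object key; // the business key, e.g. "k1"
> > >>>>>>>>>>         UUID otx;   // null => committed; non-null => temporary
> > >>>>>>>>>>     }
> > >>>>>>>>>>
> > >>>>>>>>>>     class VersionedValue {
> > >>>>>>>>>>         Object value;
> > >>>>>>>>>>         UUID ver;   // replaced on every update
> > >>>>>>>>>>     }
> > >>>>>>>>>>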
> > >>>>>>>>>> Workflow is the following:
> > >>>>>>>>>>
> > >>>>>>>>>> Orchestrator starts the distributed transaction with `otx` = x
> > >>>>>>>>>> and passes this parameter to all the services.
> > >>>>>>>>>>
> > >>>>>>>>>> Service A:
> > >>>>>>>>>> - does some computations
> > >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > >>>>>>>>>>      where
> > >>>>>>>>>>          Za - time left from the max Orchestrator TX duration
> > >>>>>>>>>>          after Service A ends
> > >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` = x
> > >>>>>>>>>>          v2a has updated version `ver`
> > >>>>>>>>>> - returns a set of updated keys and all the old versions to the
> > >>>>>>>>>>   orchestrator, or just stores it in some special atomic cache
> > >>>>>>>>>>   like [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > >>>>>>>>>>
> > >>>>>>>>>> Service B:
> > >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows
> > >>>>>>>>>>   `otx` = x
> > >>>>>>>>>> - does computations
> > >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > >>>>>>>>>> - updates the set of updated keys like
> > >>>>>>>>>>   [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> > >>>>>>>>>>
> > >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> > >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> > >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > >>>>>>>>>> - in a single transaction checks value versions for all the old
> > >>>>>>>>>>   values and replaces them with the calculated new ones
> > >>>>>>>>>> - does cleanup of temporary keys and values
> > >>>>>>>>>> - in case of version mismatch or TX timeout just rollbacks and
> > >>>>>>>>>>   signals to Orchestrator to restart the job with a new `otx`
> > >>>>>>>>>>
> > >>>>>>>>>> PROFIT!!
> > >>>>>>>>>>
> > >>>>>>>>>> This approach even allows you to run independent parts of
> > >> the
> > >>>>> graph
> > >>>>>>> in
> > >>>>>>>>>> parallel (with TX transfer you will always run only one at
> > >> a
> > >>>>> time).
> > >>>>>>>> Also
> > >>>>>>>>> it
> > >>>>>>>>>> does not require inventing any special fault tolerance
> > >>> technics
> > >>>>>>> because
> > >>>>>>>>>> Ignite caches are already fault tolerant and all the
> > >>>> intermediate
> > >>>>>>>> results
> > >>>>>>>>>> are virtually invisible and stored with TTL, thus in case
> > >> of
> > >>>> any
> > >>>>>>> crash
> > >>>>>>>>> you
> > >>>>>>>>>> will not have inconsistent state or garbage.
> > >>>>>>>>>>
> > >>>>>>>>>> Sergi
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>> :
> > >>>>>>>>>>
> > >>>>>>>>>>> Okay, we are open for proposals on business task. I mean,
> > >>> we
> > >>>>> can
> > >>>>>>> make
> > >>>>>>>>> use
> > >>>>>>>>>>> of some other thing, not distributed transaction. Not
> > >>>>> transaction
> > >>>>>>>> yet.
> > >>>>>>>>>>>
> > >>>>>>>>>>> ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <
> > >>>>>> vozerov@gridgain.com
> > >>>>>>>> :
> > >>>>>>>>>>>
> > >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi already
> > >>>>>>> mentioned,
> > >>>>>>>>> the
> > >>>>>>>>>>>> problem is far more complex, than simply passing TX
> > >> state
> > >>>>> over
> > >>>>>> a
> > >>>>>>>>> wire.
> > >>>>>>>>>>> Most
> > >>>>>>>>>>>> probably a kind of coordinator will be required still
> > >> to
> > >>>>> manage
> > >>>>>>> all
> > >>>>>>>>>> kinds
> > >>>>>>>>>>>> of failures. This task should be started with clean
> > >>> design
> > >>>>>>> proposal
> > >>>>>>>>>>>> explaining how we handle all these concurrent events.
> > >> And
> > >>>>> only
> > >>>>>>>> then,
> > >>>>>>>>>> when
> > >>>>>>>>>>>> we understand all implications, we should move to
> > >>>> development
> > >>>>>>>> stage.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>> Right
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
> > >>>>>>>>>> sergi.vladykin@gmail.com
> > >>>>>>>>>>>> :
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some
> > >>>>>> predefined
> > >>>>>>>>> graph
> > >>>>>>>>>> of
> > >>>>>>>>>>>>>> distributed services to be invoked, calls them by
> > >>> some
> > >>>>> kind
> > >>>>>>> of
> > >>>>>>>>> RPC
> > >>>>>>>>>>> and
> > >>>>>>>>>>>>>> passes the needed parameters between them, right?
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Sergi
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> orchestrator is a custom thing. He is responsible
> > >>> for
> > >>>>>>>> managing
> > >>>>>>>>>>>> business
> > >>>>>>>>>>>>>>> scenarios flows. Many nodes are involved in
> > >>>> scenarios.
> > >>>>>> They
> > >>>>>>>>>>> exchange
> > >>>>>>>>>>>>> data
> > >>>>>>>>>>>>>>> and folow one another. If you acquinted with BPMN
> > >>>>>>> framework,
> > >>>>>>>> so
> > >>>>>>>>>>>>>>> orchestrator is like bpmn engine.
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
> > >>>>>>>>>> sergi.vladykin@gmail.com
> > >>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing
> > >> from
> > >>>>>>> Microsoft
> > >>>>>>>> or
> > >>>>>>>>>>> your
> > >>>>>>>>>>>>>> custom
> > >>>>>>>>>>>>>>>> in-house software?
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Sergi
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > >>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers
> > >>> which
> > >>>>>>> fulfills
> > >>>>>>>>>>> custom
> > >>>>>>>>>>>>>> logic.
> > >>>>>>>>>>>>>>>>> This servers compound oriented graph (BPMN
> > >>>> process)
> > >>>>>>> which
> > >>>>>>>>>>>>> controlled
> > >>>>>>>>>>>>>> by
> > >>>>>>>>>>>>>>>>> Orchestrator.
> > >>>>>>>>>>>>>>>>> For instance, *server1  *creates *variable A
> > >>>> *with
> > >>>>>>> value
> > >>>>>>>> 1,
> > >>>>>>>>>>>>> persists
> > >>>>>>>>>>>>>> it
> > >>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>> IGNITE cache and creates *variable B *and
> > >> sends
> > >>>> it
> > >>>>>> to*
> > >>>>>>>>>> server2.
> > >>>>>>>>>>>>> *The
> > >>>>>>>>>>>>>>>>> latests receives *variable B*, do some logic
> > >>> with
> > >>>>> it
> > >>>>>>> and
> > >>>>>>>>>> stores
> > >>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>> IGNITE.
> > >>>>>>>>>>>>>>>>> All the work made by both servers must be
> > >>>> fulfilled
> > >>>>>> in
> > >>>>>>>>> *one*
> > >>>>>>>>>>>>>>> transaction.
> > >>>>>>>>>>>>>>>>> Because we need all information done, or
> > >>>>>>>>> nothing(rollbacked).
> > >>>>>>>>>>> The
> > >>>>>>>>>>>>>>>> scenario
> > >>>>>>>>>>>>>>>>> is managed by orchestrator.
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > >>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your
> > >>> wrong
> > >>>>>>>> solution
> > >>>>>>>>>> for
> > >>>>>>>>>>>> it.
> > >>>>>>>>>>>>>>>>>> Lets try again, what is the business case?
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> Sergi
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY
> > >> KUZNETSOV
> > >>> <
> > >>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> The case is the following, One starts
> > >>>>> transaction
> > >>>>>>> in
> > >>>>>>>>> one
> > >>>>>>>>>>>> node,
> > >>>>>>>>>>>>>> and
> > >>>>>>>>>>>>>>>>> commit
> > >>>>>>>>>>>>>>>>>>> this transaction in another jvm node(or
> > >>>>> rollback
> > >>>>>> it
> > >>>>>>>>>>>> remotely).
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 16:30, Sergi
> > >>> Vladykin <
> > >>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>> Because even if you make it work for
> > >> some
> > >>>>>>>> simplistic
> > >>>>>>>>>>>>> scenario,
> > >>>>>>>>>>>>>>> get
> > >>>>>>>>>>>>>>>>>> ready
> > >>>>>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>>>>> write many fault tolerance tests and
> > >> make
> > >>>>> sure
> > >>>>>>> that
> > >>>>>>>>> you
> > >>>>>>>>>>> TXs
> > >>>>>>>>>>>>>> work
> > >>>>>>>>>>>>>>>>>>> gracefully
> > >>>>>>>>>>>>>>>>>>>> in all modes in case of crashes. Also
> > >>> make
> > >>>>> sure
> > >>>>>>>> that
> > >>>>>>>>> we
> > >>>>>>>>>>> do
> > >>>>>>>>>>>>> not
> > >>>>>>>>>>>>>>> have
> > >>>>>>>>>>>>>>>>> any
> > >>>>>>>>>>>>>>>>>>>> performance drops after all your
> > >> changes
> > >>> in
> > >>>>>>>> existing
> > >>>>>>>>>>>>>> benchmarks.
> > >>>>>>>>>>>>>>>> All
> > >>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>> all
> > >>>>>>>>>>>>>>>>>>>> I don't believe these conditions will
> > >> be
> > >>>> met
> > >>>>>> and
> > >>>>>>>> your
> > >>>>>>>>>>>>>>> contribution
> > >>>>>>>>>>>>>>>>> will
> > >>>>>>>>>>>>>>>>>>> be
> > >>>>>>>>>>>>>>>>>>>> accepted.
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>> Better solution to what problem?
> > >> Sending
> > >>> TX
> > >>>>> to
> > >>>>>>>>> another
> > >>>>>>>>>>>> node?
> > >>>>>>>>>>>>>> The
> > >>>>>>>>>>>>>>>>>> problem
> > >>>>>>>>>>>>>>>>>>>> statement itself is already wrong. What
> > >>>>>> business
> > >>>>>>>> case
> > >>>>>>>>>> you
> > >>>>>>>>>>>> are
> > >>>>>>>>>>>>>>>> trying
> > >>>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>>>>> solve? I'm sure everything you need can
> > >>> be
> > >>>>> done
> > >>>>>>> in
> > >>>>>>>> a
> > >>>>>>>>>> much
> > >>>>>>>>>>>>> more
> > >>>>>>>>>>>>>>>> simple
> > >>>>>>>>>>>>>>>>>> and
> > >>>>>>>>>>>>>>>>>>>> efficient way at the application level.
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>> Sergi
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY
> > >>>> KUZNETSOV
> > >>>>> <
> > >>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> Why wrong ? You know the better
> > >>> solution?
> > >>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> вт, 14 мар. 2017 г. в 15:46, Sergi
> > >>>>> Vladykin <
> > >>>>>>>>>>>>>>>>>> sergi.vladykin@gmail.com
> > >>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>> Just serializing TX object and
> > >>>>>> deserializing
> > >>>>>>> it
> > >>>>>>>>> on
> > >>>>>>>>>>>>> another
> > >>>>>>>>>>>>>>> node
> > >>>>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>>>>>>>>> meaningless, because other nodes
> > >>>>>>> participating
> > >>>>>>>> in
> > >>>>>>>>>> the
> > >>>>>>>>>>>> TX
> > >>>>>>>>>>>>>> have
> > >>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>>> know
> > >>>>>>>>>>>>>>>>>>>>> about
> > >>>>>>>>>>>>>>>>>>>>>> the new coordinator. This will
> > >>> require
> > >>>>>>> protocol
> > >>>>>>>>>>>> changes,
> > >>>>>>>>>>>>> we
> > >>>>>>>>>>>>>>>>>>> definitely
> > >>>>>>>>>>>>>>>>>>>>> will
> > >>>>>>>>>>>>>>>>>>>>>> have fault tolerance and
> > >> performance
> > >>>>>> issues.
> > >>>>>>>> IMO
> > >>>>>>>>>> the
> > >>>>>>>>>>>>> whole
> > >>>>>>>>>>>>>>> idea
> > >>>>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>>>>>>> wrong
> > >>>>>>>>>>>>>>>>>>>>>> and it makes no sense to waste time
> > >>> on
> > >>>>> it.
> > >>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>> Sergi
> > >>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > >>>>>> KUZNETSOV
> > >>>>>>> <
> > >>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> IgniteTransactionState
> > >>>> implememntation
> > >>>>>>>> contains
> > >>>>>>>>>>>>>>>> IgniteTxEntry's
> > >>>>>>>>>>>>>>>>>>> which
> > >>>>>>>>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>>>>>>>>>> supposed to be transferable
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 19:32,
> > >>> Dmitriy
> > >>>>>>>> Setrakyan
> > >>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>> dsetrakyan@apache.org
> > >>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me
> > >>> that
> > >>>>> we
> > >>>>>>> are
> > >>>>>>>>>>> passing
> > >>>>>>>>>>>>>>>>> transaction
> > >>>>>>>>>>>>>>>>>>>>> objects
> > >>>>>>>>>>>>>>>>>>>>>>>> around. Such object may contain
> > >>> all
> > >>>>>> sorts
> > >>>>>>>> of
> > >>>>>>>>>>> Ignite
> > >>>>>>>>>>>>>>>> context.
> > >>>>>>>>>>>>>>>>> If
> > >>>>>>>>>>>>>>>>>>>> some
> > >>>>>>>>>>>>>>>>>>>>>> data
> > >>>>>>>>>>>>>>>>>>>>>>>> needs to be passed across, we
> > >>>> should
> > >>>>>>>> create a
> > >>>>>>>>>>>> special
> > >>>>>>>>>>>>>>>>> transfer
> > >>>>>>>>>>>>>>>>>>>> object
> > >>>>>>>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>>>>> this case.
> > >>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>> D.
> > >>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10
> > >> AM,
> > >>>>>> ALEKSEY
> > >>>>>>>>>>> KUZNETSOV
> > >>>>>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>> well, there a couple of
> > >> issues
> > >>>>>>> preventing
> > >>>>>>>>>>>>> transaction
> > >>>>>>>>>>>>>>>>>>> proceeding.
> > >>>>>>>>>>>>>>>>>>>>>>>>> At first, After transaction
> > >>>>>>> serialization
> > >>>>>>>>> and
> > >>>>>>>>>>>>>>>>> deserialization
> > >>>>>>>>>>>>>>>>>>> on
> > >>>>>>>>>>>>>>>>>>>>> the
> > >>>>>>>>>>>>>>>>>>>>>>>> remote
> > >>>>>>>>>>>>>>>>>>>>>>>>> server, there is no txState.
> > >> So
> > >>>> im
> > >>>>>>> going
> > >>>>>>>> to
> > >>>>>>>>>> put
> > >>>>>>>>>>>> it
> > >>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>>>>>>
> > >> writeExternal()\readExternal()
> > >>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>> The last one is Deserialized
> > >>>>>>> transaction
> > >>>>>>>>>> lacks
> > >>>>>>>>>>> of
> > >>>>>>>>>>>>>>> shared
> > >>>>>>>>>>>>>>>>>> cache
> > >>>>>>>>>>>>>>>>>>>>>> context
> > >>>>>>>>>>>>>>>>>>>>>>>>> field at
> > >> TransactionProxyImpl.
> > >>>>>> Perhaps,
> > >>>>>>>> it
> > >>>>>>>>>> must
> > >>>>>>>>>>>> be
> > >>>>>>>>>>>>>>>> injected
> > >>>>>>>>>>>>>>>>>> by
> > >>>>>>>>>>>>>>>>>>>>>>>>> GridResourceProcessor ?
> > >>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>> пн, 13 мар. 2017 г. в 17:27,
> > >>>>> ALEKSEY
> > >>>>>>>>>> KUZNETSOV
> > >>>>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> while starting and
> > >> continuing
> > >>>>>>>> transaction
> > >>>>>>>>>> in
> > >>>>>>>>>>>>>>> different
> > >>>>>>>>>>>>>>>>> jvms
> > >>>>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>> run
> > >>>>>>>>>>>>>>>>>>>>>>> into
> > >>>>>>>>>>>>>>>>>>>>>>>>>> serialization exception in
> > >>>>>>>>>> writeExternalMeta
> > >>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void
> > >>>>>>>>>>>> writeExternal(ObjectOutput
> > >>>>>>>>>>>>>> out)
> > >>>>>>>>>>>>>>>>>> throws
> > >>>>>>>>>>>>>>>>>>>>>>>> IOException
> > >>>>>>>>>>>>>>>>>>>>>>>>> {
> > >>>>>>>>>>>>>>>>>>>>>>>>>>    writeExternalMeta(out);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> some meta is cannot be
> > >>>>> serialized.
> > >>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в
> > >> 17:25,
> > >>>>> Alexey
> > >>>>>>>>>>> Goncharuk <
> > >>>>>>>>>>>>>>>>>>>>>>>>> alexey.goncharuk@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to
> > >> get
> > >>>> what
> > >>>>>> you
> > >>>>>>>>> want,
> > >>>>>>>>>>>> but I
> > >>>>>>>>>>>>>>> have
> > >>>>>>>>>>>>>>>> a
> > >>>>>>>>>>>>>>>>>> few
> > >>>>>>>>>>>>>>>>>>>>>>> concerns:
> > >>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the
> > >>>>> proposed
> > >>>>>>>>> change?
> > >>>>>>>>>>> In
> > >>>>>>>>>>>>> your
> > >>>>>>>>>>>>>>>> test,
> > >>>>>>>>>>>>>>>>>> you
> > >>>>>>>>>>>>>>>>>>>>> pass
> > >>>>>>>>>>>>>>>>>>>>>> an
> > >>>>>>>>>>>>>>>>>>>>>>>>>> instance of transaction
> > >>> created
> > >>>>> on
> > >>>>>>>>>> ignite(0)
> > >>>>>>>>>>> to
> > >>>>>>>>>>>>> the
> > >>>>>>>>>>>>>>>>> ignite
> > >>>>>>>>>>>>>>>>>>>>> instance
> > >>>>>>>>>>>>>>>>>>>>>>>>>> ignite(1). This is
> > >> obviously
> > >>>> not
> > >>>>>>>> possible
> > >>>>>>>>>> in
> > >>>>>>>>>>> a
> > >>>>>>>>>>>>>> truly
> > >>>>>>>>>>>>>>>>>>>> distributed
> > >>>>>>>>>>>>>>>>>>>>>>>>>> (multi-jvm) environment.
> > >>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize
> > >>>> cache
> > >>>>>>> update
> > >>>>>>>>>>> actions
> > >>>>>>>>>>>>> and
> > >>>>>>>>>>>>>>>>>>> transaction
> > >>>>>>>>>>>>>>>>>>>>>>> commit?
> > >>>>>>>>>>>>>>>>>>>>>>>>>> Say, you have one node that
> > >>>>> decided
> > >>>>>>> to
> > >>>>>>>>>>> commit,
> > >>>>>>>>>>>>> but
> > >>>>>>>>>>>>>>>>> another
> > >>>>>>>>>>>>>>>>>>> node
> > >>>>>>>>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>>>>>>>>>>> still
> > >>>>>>>>>>>>>>>>>>>>>>>>>> writing within this
> > >>>> transaction.
> > >>>>>> How
> > >>>>>>> do
> > >>>>>>>>> you
> > >>>>>>>>>>>> make
> > >>>>>>>>>>>>>> sure
> > >>>>>>>>>>>>>>>>> that
> > >>>>>>>>>>>>>>>>>>> two
> > >>>>>>>>>>>>>>>>>>>>>> nodes
> > >>>>>>>>>>>>>>>>>>>>>>>> will
> > >>>>>>>>>>>>>>>>>>>>>>>>>> not call commit() and
> > >>>> rollback()
> > >>>>>>>>>>>> simultaneously?
> > >>>>>>>>>>>>>>>>>>>>>>>>>> - How do you make sure
> > >> that
> > >>>>> either
> > >>>>>>>>>> commit()
> > >>>>>>>>>>> or
> > >>>>>>>>>>>>>>>>> rollback()
> > >>>>>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>>>>>>>>> called
> > >>>>>>>>>>>>>>>>>>>>>>> if
> > >>>>>>>>>>>>>>>>>>>>>>>>> an
> > >>>>>>>>>>>>>>>>>>>>>>>>>> originator failed?
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:38 GMT+03:00
> > >>>>> Дмитрий
> > >>>>>>>> Рябов
> > >>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>> somefireone@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> Alexey Goncharuk, heh, my
> > >>>>> initial
> > >>>>>>>>>>>> understanding
> > >>>>>>>>>>>>>> was
> > >>>>>>>>>>>>>>>>> that
> > >>>>>>>>>>>>>>>>>>>>>>> transferring
> > >>>>>>>>>>>>>>>>>>>>>>>>> of
> > >>>>>>>>>>>>>>>>>>>>>>>>>> tx
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> ownership from one node
> > >> to
> > >>>>>> another
> > >>>>>>>> will
> > >>>>>>>>>> be
> > >>>>>>>>>>>>>> happened
> > >>>>>>>>>>>>>>>>>>>>> automatically
> > >>>>>>>>>>>>>>>>>>>>>>>> when
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> originating node is gone
> > >>>> down.
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:36
> > >> GMT+03:00
> > >>>>>> ALEKSEY
> > >>>>>>>>>>> KUZNETSOV
> > >>>>>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> Im aiming to span
> > >>>> transaction
> > >>>>>> on
> > >>>>>>>>>> multiple
> > >>>>>>>>>>>>>>> threads,
> > >>>>>>>>>>>>>>>>>> nodes,
> > >>>>>>>>>>>>>>>>>>>>>>>> jvms(soon).
> > >>>>>>>>>>>>>>>>>>>>>>>>>> So
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> every node is able to
> > >>>>> rollback,
> > >>>>>>> or
> > >>>>>>>>>> commit
> > >>>>>>>>>>>>>> common
> > >>>>>>>>>>>>>>>>>>>>> transaction.It
> > >>>>>>>>>>>>>>>>>>>>>>>>> turned
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> up i
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> need to transfer tx
> > >>> between
> > >>>>>> nodes
> > >>>>>>>> in
> > >>>>>>>>>>> order
> > >>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>> commit
> > >>>>>>>>>>>>>>>>>>>>>> transaction
> > >>>>>>>>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> different node(in the
> > >>> same
> > >>>>>> jvm).
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> пт, 10 мар. 2017 г. в
> > >>>> 15:20,
> > >>>>>>> Alexey
> > >>>>>>>>>>>>> Goncharuk <
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >> alexey.goncharuk@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Do you mean that you
> > >>>> want a
> > >>>>>>>> concept
> > >>>>>>>>>> of
> > >>>>>>>>>>>>>>>> transferring
> > >>>>>>>>>>>>>>>>>> of
> > >>>>>>>>>>>>>>>>>>> tx
> > >>>>>>>>>>>>>>>>>>>>>>>> ownership
> > >>>>>>>>>>>>>>>>>>>>>>>>>>> from
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> one node to another?
> > >> My
> > >>>>>> initial
> > >>>>>>>>>>>>> understanding
> > >>>>>>>>>>>>>>> was
> > >>>>>>>>>>>>>>>>>> that
> > >>>>>>>>>>>>>>>>>>>> you
> > >>>>>>>>>>>>>>>>>>>>>> want
> > >>>>>>>>>>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>>>>>>>>>> be
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> able
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> to update keys in a
> > >>>>>> transaction
> > >>>>>>>>> from
> > >>>>>>>>>>>>> multiple
> > >>>>>>>>>>>>>>>>> threads
> > >>>>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>>>>> parallel.
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --AG
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:01
> > >>>> GMT+03:00
> > >>>>>>>> ALEKSEY
> > >>>>>>>>>>>>> KUZNETSOV
> > >>>>>>>>>>>>>> <
> > >>>>>>>>>>>>>>>>>>>>>>>>>> alkuznetsov.sb@gmail.com
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> :
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well. Consider
> > >>>>> transaction
> > >>>>>>>>> started
> > >>>>>>>>>> in
> > >>>>>>>>>>>> one
> > >>>>>>>>>>>>>>> node,
> > >>>>>>>>>>>>>>>>> and
> > >>>>>>>>>>>>>>>>>>>>>> continued
> > >>>>>>>>>>>>>>>>>>>>>>>> in
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>> another
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> one.
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The following test
> > >>>>>> describes
> > >>>>>>> my
> > >>>>>>>>>> idea:
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ignite ignite1 =
> > >>>>> ignite(0);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteTransactions
> > >>>>>>>> transactions =
> > >>>>>>>>>>>>>>>>>>>> ignite1.transactions();
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteCache<String,
> > >>>>>> Integer>
> > >>>>>>>>> cache
> > >>>>>>>>>> =
> > >>>>>>>>>>>>>>>>>>>>>>> ignite1.getOrCreateCache("
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> testCache");
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Transaction tx =
> > >>>>>>>>>>> transactions.txStart(
> > >>>>>>>>>>>>>>>>> concurrency,
> > >>>>>>>>>>>>>>>>>>>>>>> isolation);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key1",
> > >> 1);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key2",
> > >> 2);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> tx.stop();
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>> IgniteInternalFuture<Boolean>
> > >>>>>>>>> fut =
> > >>>>>>>>>>>>>>>>>>>>>> GridTestUtils.runAsync(()
> > >>>>>>>>>>>>>>>>>>>>>>>> ->
> > >>>>>>>>>>>>>>>>>>>>>>>>> {
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>> IgniteTransactions
> > >>>>> ts =
> > >>>>>>>>>>>>>>>>>> ignite(1).transactions();
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>> Assert.assertNull(ts.tx());
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>> Assert.assertEquals(
> > >>>>>>>>>>>>>>>>> TransactionState.STOPPED,
> > >>>>>>>>>>>>>>>>>>>>>>> tx.state());
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>    ts.txStart(tx);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Assert.assertEquals(TransactionState.ACTIVE,
> > >>>>>>>>>>>>>>>>>>>>>>> tx.state());
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >> cache.put("key3",
> > >>>> 3);
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Assert.assertTrue(cache.remove("key2"));
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>    tx.commit();
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>    return true;
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> });
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> fut.get();
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >> Assert.assertEquals(
> > >>>>>>>>>>>>>>> TransactionState.COMMITTED,
> > >>>>>>>>>>>>>>>>>>>>>> tx.state());
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>> Assert.assertEquals((long)1,
> > >>>>>>>>>>>>>>>>>>> (long)cache.get("key1"));
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>> Assert.assertEquals((long)3,
> > >>>>>>>>>>>>>>>>>>> (long)cache.get("key3"));
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>> Assert.assertFalse(cache.
> > >>>>>>>>>>>>>>> containsKey("key2"));
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In method
> > >>>>> *ts.txStart(...)*
> > >>>>>>> we
> > >>>>>>>>> just
> > >>>>>>>>>>>>> rebind
> > >>>>>>>>>>>>>>> *tx*
> > >>>>>>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>>>>>> current
> > >>>>>>>>>>>>>>>>>>>>>>>>> thread:
> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>
> >
> > --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
So, what do you think of my idea?

Wed, 29 Mar 2017 at 10:35, ALEKSEY KUZNETSOV <al...@gmail.com>:

> Hi! No, I don't have a ticket for this.
> In the ticket I have implemented methods that change the transaction status
> to STOP, thus letting it commit the transaction in another thread. In another
> thread you are going to restart the transaction in order to commit it.
> The mechanism behind it is obvious: we change the thread id to the newer one
> in the ThreadMap, and make use of serialization of the txState and the
> transactions themselves to transfer them into another thread.
>
>
> Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:
>
> Aleksey,
>
> Do you have a ticket for this? Could you briefly list what exactly was
> done and how things work?
>
> —
> Denis
>
> > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <al...@gmail.com> wrote:
> >
> > Hi, Igniters! I've made an implementation of transactions with a non-single
> > coordinator. Here you can start a transaction in one thread and commit it
> > in another thread.
> > Take a look at it. Give your thoughts on it.
> >
> >
> https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> >
> > Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <se...@gmail.com>:
> >
> >> You know better, go ahead! :)
> >>
> >> Sergi
> >>
> >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>
> >>> we've discovered several problems regarding your "accumulation"
> >>> approach. These are:
> >>>
> >>>   1. performance issues when transferring data from the temporary cache
> >>>   to the permanent one. Keep in mind the big number of concurrent
> >>>   transactions in the Service committer.
> >>>   2. extreme memory load when keeping the temporary cache in memory.
> >>>   3. as long as the user is not acquainted with Ignite, working with the
> >>>   cache must be transparent for him. Keep this in mind. The user's node
> >>>   can evaluate logic with no transaction at all, so we should deal with
> >>>   both types of execution flow: transactional and non-transactional.
> >>>   Another problem is transaction id support at the user node. We would
> >>>   have to handle all these issues and many more.
> >>>   4. we cannot pessimistically lock an entity.
> >>>
> >>> As a result, we decided to move on to building the distributed
> >>> transaction. We put aside your "accumulation" approach until we figure
> >>> out how to solve the difficulties above.
> >>>
> >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>
> >>>> The problem "How to run millions of entities, and millions of
> >>>> operations, on a single Pentium 3" is out of scope here. Do the math,
> >>>> plan capacity reasonably.
> >>>>
> >>>> Sergi
> >>>>
> >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>
> >>>>> hmm, if we have millions of entities and millions of operations,
> >>>>> wouldn't this approach lead to memory overflow and performance
> >>>>> degradation?
> >>>>>
> >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>
> >>>>>> 1. Actually you have to check versions on all the values you have read
> >>>>>> during the tx.
> >>>>>>
> >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> >>>>>>
> >>>>>> put(k1, get(k2) + 5)
> >>>>>>
> >>>>>> We have to remember the version for k2. This logic can be relatively
> >>>>>> easily encapsulated in a framework atop of Ignite. You need to
> >>>>>> implement one to make all this stuff usable.
> >>>>>>
> >>>>>> 2. I suggest avoiding any locking here, because you will easily end up
> >>>>>> with deadlocks. If you do not have too frequent updates for your keys,
> >>>>>> an optimistic approach will work just fine.
> >>>>>>
> >>>>>> Theoretically in the Committer Service you can start a thread for the
> >>>>>> lifetime of the whole distributed transaction, take a lock on the key
> >>>>>> using IgniteCache.lock(K key) before executing any Services, wait for
> >>>>>> all the services to complete, execute the optimistic commit in the
> >>>>>> same thread while keeping this lock and then release it. Notice that
> >>>>>> all the Ignite transactions inside of all Services must be optimistic
> >>>>>> here to be able to read this locked key.
> >>>>>>
> >>>>>> But again, I do not recommend using this approach until you have a
> >>>>>> reliable deadlock avoidance scheme.
> >>>>>>
> >>>>>> Sergi
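> >>>>>>
> >>>>>> For illustration, a minimal application-level sketch of this "remember
> >>>>>> the versions you read" idea (VersionedValue and ReadSet are assumed
> >>>>>> helper classes for such a framework, not Ignite API):
> >>>>>>
> >>>>>> import java.util.Map;
> >>>>>> import java.util.UUID;
> >>>>>> import java.util.concurrent.ConcurrentHashMap;
> >>>>>>
> >>>>>> // Value wrapper carrying the version field described above.
> >>>>>> class VersionedValue<T> implements java.io.Serializable {
> >>>>>>     final T val;
> >>>>>>     final UUID ver;
> >>>>>>     VersionedValue(T val, UUID ver) { this.val = val; this.ver = ver; }
> >>>>>> }
> >>>>>>
> >>>>>> // Records the version of every value a service reads during the
> >>>>>> // distributed tx, so the committer can validate them later.
> >>>>>> class ReadSet<K> {
> >>>>>>     private final Map<K, UUID> readVers = new ConcurrentHashMap<>();
> >>>>>>
> >>>>>>     <T> T read(K key, VersionedValue<T> vv) {
> >>>>>>         readVers.put(key, vv.ver);
> >>>>>>         return vv.val;
> >>>>>>     }
> >>>>>>
> >>>>>>     // True if none of the values read earlier have changed since.
> >>>>>>     boolean stillValid(Map<K, VersionedValue<?>> current) {
> >>>>>>         return readVers.entrySet().stream()
> >>>>>>             .allMatch(e -> current.containsKey(e.getKey())
> >>>>>>                 && current.get(e.getKey()).ver.equals(e.getValue()));
> >>>>>>     }
> >>>>>> }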
> >>>>>>
> >>>>>>
> >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>
> >>>>>>> Yeah, now I got it.
> >>>>>>> There are some doubts on this approach:
> >>>>>>> 1) During the optimistic commit phase, when you assure no one altered
> >>>>>>> the original values, you must check versions of other dependent keys.
> >>>>>>> How could we obtain those keys (in an automated manner, of course)?
> >>>>>>> 2) How could we lock a key before some Service A introduces changes,
> >>>>>>> so that no other service is allowed to change this key-value (a sort
> >>>>>>> of pessimistic blocking)?
> >>>>>>> Maybe you know some implementations of such an approach?
> >>>>>>>
> >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>
> >>>>>>>> Thank you very much for the help. I will answer later.
> >>>>>>>>
> >>>>>>>> Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>
> >>>>>>>> All the services do not update the key in place, but only generate
> >>>>>>>> new keys augmented by otx and store the updated value in the same
> >>>>>>>> cache + remember the keys and versions participating in the
> >>>>>>>> transaction in some separate atomic cache.
> >>>>>>>>
> >>>>>>>> Follow this sequence of changes applied to cache contents by each
> >>>>>>>> Service:
> >>>>>>>>
> >>>>>>>> Initial cache contents:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>
> >>>>>>>> Cache contents after Service A:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2a]
> >>>>>>>>
> >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic
> >>>>>>>>           cache
> >>>>>>>>
> >>>>>>>> Cache contents after Service B:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2ab]
> >>>>>>>>            [k3x => v3b]
> >>>>>>>>
> >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> >>>>>>>>           separate atomic cache
> >>>>>>>>
> >>>>>>>> Finally the Committer Service takes this map of updated keys and
> >>>>>>>> their versions from some separate atomic cache, starts an Ignite
> >>>>>>>> transaction and replaces all the values for k* keys with values
> >>>>>>>> taken from k*x keys. The successful result must be the following:
> >>>>>>>>
> >>>>>>>>            [k1 => v1a]
> >>>>>>>>            [k2 => v2ab]
> >>>>>>>>            [k3 => v3b]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2ab]
> >>>>>>>>            [k3x => v3b]
> >>>>>>>>
> >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> >>>>>>>>           separate atomic cache
> >>>>>>>>
> >>>>>>>> But the Committer Service also has to check that no one updated the
> >>>>>>>> original values before us, because otherwise we can not give any
> >>>>>>>> serializability guarantee for these distributed transactions. Here
> >>>>>>>> we may need to check not only versions of the updated keys, but also
> >>>>>>>> versions of any other keys the end result depends on.
> >>>>>>>>
> >>>>>>>> After that the Committer Service has to do a cleanup (may be outside
> >>>>>>>> of the committing tx) to come to the following final state:
> >>>>>>>>
> >>>>>>>>            [k1 => v1a]
> >>>>>>>>            [k2 => v2ab]
> >>>>>>>>            [k3 => v3b]
> >>>>>>>>
> >>>>>>>> Makes sense?
> >>>>>>>>
> >>>>>>>> Sergi
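> >>>>>>>>
> >>>>>>>> Roughly, that Committer Service step could look like this (a sketch
> >>>>>>>> only: the cache names, the "x" key suffix and the VersionedValue
> >>>>>>>> wrapper sketched earlier in the thread are assumptions, not Ignite
> >>>>>>>> API):
> >>>>>>>>
> >>>>>>>> import java.util.Map;
> >>>>>>>> import java.util.UUID;
> >>>>>>>> import org.apache.ignite.Ignite;
> >>>>>>>> import org.apache.ignite.IgniteCache;
> >>>>>>>> import org.apache.ignite.transactions.Transaction;
> >>>>>>>> import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
> >>>>>>>> import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
> >>>>>>>>
> >>>>>>>> public class CommitterService {
> >>>>>>>>     /** Returns false on version mismatch. */
> >>>>>>>>     public boolean commit(Ignite ignite, UUID otx) {
> >>>>>>>>         IgniteCache<String, VersionedValue<Object>> data = ignite.cache("data");
> >>>>>>>>         IgniteCache<UUID, Map<String, UUID>> txKeys = ignite.cache("txKeys");
> >>>>>>>>
> >>>>>>>>         // [x => (k1 -> ver1, k2 -> ver2, ...)] recorded by the services.
> >>>>>>>>         Map<String, UUID> vers = txKeys.get(otx);
> >>>>>>>>
> >>>>>>>>         if (vers == null)
> >>>>>>>>             return false;
> >>>>>>>>
> >>>>>>>>         try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
> >>>>>>>>             for (Map.Entry<String, UUID> e : vers.entrySet()) {
> >>>>>>>>                 VersionedValue<Object> old = data.get(e.getKey());
> >>>>>>>>
> >>>>>>>>                 // Someone committed a newer version first -> give up.
> >>>>>>>>                 if (old == null || !old.ver.equals(e.getValue()))
> >>>>>>>>                     return false;
> >>>>>>>>
> >>>>>>>>                 // Replace the committed value with the temporary one (k1 <- k1x).
> >>>>>>>>                 data.put(e.getKey(), data.get(e.getKey() + "x"));
> >>>>>>>>             }
> >>>>>>>>             tx.commit();
> >>>>>>>>         }
> >>>>>>>>
> >>>>>>>>         // Cleanup of temporary keys may happen outside the committing tx.
> >>>>>>>>         for (String k : vers.keySet())
> >>>>>>>>             data.remove(k + "x");
> >>>>>>>>         txKeys.remove(otx);
> >>>>>>>>
> >>>>>>>>         return true;
> >>>>>>>>     }
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>> (The orchestrator would retry the whole job with a new `otx`
> >>>>>>>> whenever this returns false.)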
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>
> >>>>>>>>>   - what do you mean by saying "*in a single transaction checks
> >>>>>>>>>   value versions for all the old values and replaces them with
> >>>>>>>>>   calculated new ones*"? Every time you change a value (in some
> >>>>>>>>>   service), you store it to *some special atomic cache*, so when
> >>>>>>>>>   all services have ceased working, the Service committer has the
> >>>>>>>>>   values with the last versions.
> >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the Service
> >>>>>>>>>   committer persists them into the permanent store, doesn't it?
> >>>>>>>>>   - I can't grasp your thought: you say "*in case of version
> >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions would
> >>>>>>>>>   it match?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>
> >>>>>>>>>> Ok, here is what you actually need to implement at the application
> >>>>>>>>>> level.
> >>>>>>>>>>
> >>>>>>>>>> Lets say we have to call 2 services in the following order:
> >>>>>>>>>> - Service A: wants to update keys [k1 => v1, k2 => v2] to
> >>>>>>>>>>   [k1 => v1a, k2 => v2a]
> >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3] to
> >>>>>>>>>>   [k2 => v2ab, k3 => v3b]
> >>>>>>>>>>
> >>>>>>>>>> The change
> >>>>>>>>>>    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> >>>>>>>>>>    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> >>>>>>>>>> must happen in a single transaction.
> >>>>>>>>>>
> >>>>>>>>>> Optimistic protocol to solve this:
> >>>>>>>>>>
> >>>>>>>>>> Each cache key must have a field `otx`, which is a unique
> >>>>>>>>>> orchestrator TX identifier - it must be a parameter passed to all
> >>>>>>>>>> the services. If `otx` is set to some value it means that it is an
> >>>>>>>>>> intermediate key and is visible only inside of some transaction;
> >>>>>>>>>> for the finalized key `otx` must be null - it means the key is
> >>>>>>>>>> committed and visible for everyone.
> >>>>>>>>>>
> >>>>>>>>>> Each cache value must have a field `ver` which is a version of
> >>>>>>>>>> that value.
> >>>>>>>>>>
> >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use UUID.
> >>>>>>>>>>
> >>>>>>>>>> Workflow is the following:
> >>>>>>>>>>
> >>>>>>>>>> Orchestrator starts the distributed transaction with `otx` = x and
> >>>>>>>>>> passes this parameter to all the services.
> >>>>>>>>>>
> >>>>>>>>>> Service A:
> >>>>>>>>>> - does some computations
> >>>>>>>>>> - stores [k1x => v1a, k2x => v2a] with TTL = Za
> >>>>>>>>>>      where
> >>>>>>>>>>          Za - the time left from the max Orchestrator TX duration
> >>>>>>>>>>          after Service A ends
> >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` = x
> >>>>>>>>>>          v2a has updated version `ver`
> >>>>>>>>>> - returns a set of updated keys and all the old versions to the
> >>>>>>>>>>   orchestrator, or just stores it in some special atomic cache
> >>>>>>>>>>   like [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> >>>>>>>>>>
> >>>>>>>>>> Service B:
> >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows `otx` = x
> >>>>>>>>>> - does computations
> >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> >>>>>>>>>> - updates the set of updated keys like
> >>>>>>>>>>   [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> >>>>>>>>>>
> >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> >>>>>>>>>> - in a single transaction checks value versions for all the old
> >>>>>>>>>>   values and replaces them with the calculated new ones
> >>>>>>>>>> - does cleanup of temporary keys and values
> >>>>>>>>>> - in case of version mismatch or TX timeout just rollbacks and
> >>>>>>>>>>   signals to Orchestrator to restart the job with a new `otx`
> >>>>>>>>>>
> >>>>>>>>>> PROFIT!!
> >>>>>>>>>>
> >>>>>>>>>> This approach even allows you to run independent parts of the
> >>>>>>>>>> graph in parallel (with TX transfer you will always run only one
> >>>>>>>>>> at a time). Also it does not require inventing any special fault
> >>>>>>>>>> tolerance technics because Ignite caches are already fault
> >>>>>>>>>> tolerant and all the intermediate results are virtually invisible
> >>>>>>>>>> and stored with TTL, thus in case of any crash you will not have
> >>>>>>>>>> inconsistent state or garbage.
> >>>>>>>>>>
> >>>>>>>>>> Sergi
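> >>>>>>>>>>
> >>>>>>>>>> A sketch of what a single service step might look like under this
> >>>>>>>>>> protocol (TempKey is an assumed application class, the cache name
> >>>>>>>>>> and values are made up, and VersionedValue is the wrapper sketched
> >>>>>>>>>> earlier in the thread):
> >>>>>>>>>>
> >>>>>>>>>> import java.util.Objects;
> >>>>>>>>>> import java.util.UUID;
> >>>>>>>>>> import java.util.concurrent.TimeUnit;
> >>>>>>>>>> import javax.cache.expiry.CreatedExpiryPolicy;
> >>>>>>>>>> import javax.cache.expiry.Duration;
> >>>>>>>>>> import org.apache.ignite.Ignite;
> >>>>>>>>>> import org.apache.ignite.IgniteCache;
> >>>>>>>>>>
> >>>>>>>>>> // Key with the `otx` field: otx == null means committed/visible.
> >>>>>>>>>> class TempKey implements java.io.Serializable {
> >>>>>>>>>>     final String key;
> >>>>>>>>>>     final UUID otx;
> >>>>>>>>>>     TempKey(String key, UUID otx) { this.key = key; this.otx = otx; }
> >>>>>>>>>>     @Override public boolean equals(Object o) {
> >>>>>>>>>>         return o instanceof TempKey && ((TempKey)o).key.equals(key)
> >>>>>>>>>>             && Objects.equals(((TempKey)o).otx, otx);
> >>>>>>>>>>     }
> >>>>>>>>>>     @Override public int hashCode() { return Objects.hash(key, otx); }
> >>>>>>>>>> }
> >>>>>>>>>>
> >>>>>>>>>> public class ServiceA {
> >>>>>>>>>>     // otx - orchestrator TX id, ttlMs - time left of max TX duration (Za).
> >>>>>>>>>>     public void run(Ignite ignite, UUID otx, long ttlMs) {
> >>>>>>>>>>         IgniteCache<TempKey, VersionedValue<Integer>> cache =
> >>>>>>>>>>             ignite.<TempKey, VersionedValue<Integer>>cache("data")
> >>>>>>>>>>                 .withExpiryPolicy(new CreatedExpiryPolicy(
> >>>>>>>>>>                     new Duration(TimeUnit.MILLISECONDS, ttlMs)));
> >>>>>>>>>>
> >>>>>>>>>>         // Intermediate entries expire on their own if the tx never commits.
> >>>>>>>>>>         cache.put(new TempKey("k1", otx), new VersionedValue<>(1, UUID.randomUUID()));
> >>>>>>>>>>         cache.put(new TempKey("k2", otx), new VersionedValue<>(2, UUID.randomUUID()));
> >>>>>>>>>>     }
> >>>>>>>>>> }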
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>
> >>>>>>>>>>> Okay, we are open for proposals on the business task. I mean, we
> >>>>>>>>>>> can make use of some other thing, not a distributed transaction.
> >>>>>>>>>>> Not a transaction yet.
> >>>>>>>>>>>
> >>>>>>>>>>> Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> >>>>>>>>>>>
> >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi already
> >>>>>>>>>>>> mentioned, the problem is far more complex than simply passing
> >>>>>>>>>>>> TX state over a wire. Most probably a kind of coordinator will
> >>>>>>>>>>>> still be required to manage all kinds of failures. This task
> >>>>>>>>>>>> should be started with a clean design proposal explaining how we
> >>>>>>>>>>>> handle all these concurrent events. And only then, when we
> >>>>>>>>>>>> understand all implications, we should move to the development
> >>>>>>>>>>>> stage.
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV
> >>>>>>>>>>>> <alkuznetsov.sb@gmail.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Right
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Wed, 15 Mar 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some predefined
> >>>>>>>>>>>>>> graph of distributed services to be invoked, calls them by
> >>>>>>>>>>>>>> some kind of RPC and passes the needed parameters between
> >>>>>>>>>>>>>> them, right?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible for
> >>>>>>>>>>>>>>> managing business scenario flows. Many nodes are involved in
> >>>>>>>>>>>>>>> scenarios. They exchange data and follow one another. If you
> >>>>>>>>>>>>>>> are acquainted with the BPMN framework, the orchestrator is
> >>>>>>>>>>>>>>> like a BPMN engine.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 18:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing from Microsoft
> >>>>>>>>>>>>>>>> or your custom in-house software?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers which fulfill
> >>>>>>>>>>>>>>>>> custom logic. These servers compound an oriented graph (a
> >>>>>>>>>>>>>>>>> BPMN process) which is controlled by the Orchestrator.
> >>>>>>>>>>>>>>>>> For instance, *server1* creates *variable A* with value 1,
> >>>>>>>>>>>>>>>>> persists it to the IGNITE cache, creates *variable B* and
> >>>>>>>>>>>>>>>>> sends it to *server2*. The latter receives *variable B*,
> >>>>>>>>>>>>>>>>> does some logic with it and stores it to IGNITE.
> >>>>>>>>>>>>>>>>> All the work made by both servers must be fulfilled in
> >>>>>>>>>>>>>>>>> *one* transaction, because we need all information done, or
> >>>>>>>>>>>>>>>>> nothing (rolled back). The scenario is managed by the
> >>>>>>>>>>>>>>>>> orchestrator.
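> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> To make the desired semantics concrete, a condensed sketch
> >>>>>>>>>>>>>>>>> (every type here is a hypothetical application-level
> >>>>>>>>>>>>>>>>> abstraction; Ignite itself provides no such cross-server
> >>>>>>>>>>>>>>>>> transaction today):
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> interface Server {
> >>>>>>>>>>>>>>>>>     Object step(Object input); // does its logic, writes to IGNITE
> >>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> interface Orchestrator {
> >>>>>>>>>>>>>>>>>     // Either every step's cache writes commit, or none do.
> >>>>>>>>>>>>>>>>>     void inOneDistributedTx(Runnable scenario);
> >>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> class Scenario {
> >>>>>>>>>>>>>>>>>     void run(Orchestrator orchestrator, Server server1, Server server2) {
> >>>>>>>>>>>>>>>>>         orchestrator.inOneDistributedTx(() -> {
> >>>>>>>>>>>>>>>>>             Object b = server1.step(null); // persists A, returns B
> >>>>>>>>>>>>>>>>>             server2.step(b);               // consumes B, stores result
> >>>>>>>>>>>>>>>>>         });
> >>>>>>>>>>>>>>>>>     }
> >>>>>>>>>>>>>>>>> }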
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your wrong solution
> >>>>>>>>>>>>>>>>>> for it. Lets try again, what is the business case?
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> The case is the following: one starts a transaction in
> >>>>>>>>>>>>>>>>>>> one node, and commits this transaction in another jvm
> >>>>>>>>>>>>>>>>>>> node (or rolls it back remotely).
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Because even if you make it work for some simplistic
> >>>>>>>>>>>>>>>>>>>> scenario, get ready to write many fault tolerance tests
> >>>>>>>>>>>>>>>>>>>> and make sure that your TXs work gracefully in all modes
> >>>>>>>>>>>>>>>>>>>> in case of crashes. Also make sure that we do not have
> >>>>>>>>>>>>>>>>>>>> any performance drops after all your changes in existing
> >>>>>>>>>>>>>>>>>>>> benchmarks. All in all I don't believe these conditions
> >>>>>>>>>>>>>>>>>>>> will be met and your contribution will be accepted.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Better solution to what problem? Sending a TX to another
> >>>>>>>>>>>>>>>>>>>> node? The problem statement itself is already wrong.
> >>>>>>>>>>>>>>>>>>>> What business case are you trying to solve? I'm sure
> >>>>>>>>>>>>>>>>>>>> everything you need can be done in a much more simple
> >>>>>>>>>>>>>>>>>>>> and efficient way at the application level.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Just serializing a TX object and deserializing it on
> >>>>>>>>>>>>>>>>>>>>>> another node is meaningless, because the other nodes
> >>>>>>>>>>>>>>>>>>>>>> participating in the TX have to know about the new
> >>>>>>>>>>>>>>>>>>>>>> coordinator. This will require protocol changes, and
> >>>>>>>>>>>>>>>>>>>>>> we will definitely have fault tolerance and
> >>>>>>>>>>>>>>>>>>>>>> performance issues. IMO the whole idea is wrong and it
> >>>>>>>>>>>>>>>>>>>>>> makes no sense to waste time on it.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState implementation contains
> >>>>>>>>>>>>>>>>>>>>>>> IgniteTxEntry's, which are supposed to be
> >>>>>>>>>>>>>>>>>>>>>>> transferable.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Mon, 13 Mar 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that we are passing
> >>>>>>>>>>>>>>>>>>>>>>>> transaction objects around. Such an object may
> >>>>>>>>>>>>>>>>>>>>>>>> contain all sorts of Ignite context. If some data
> >>>>>>>>>>>>>>>>>>>>>>>> needs to be passed across, we should create a
> >>>>>>>>>>>>>>>>>>>>>>>> special transfer object in this case.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> D.
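> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> For illustration only, such a transfer object might
> >>>>>>>>>>>>>>>>>>>>>>>> be a plain DTO along these lines (the field set is a
> >>>>>>>>>>>>>>>>>>>>>>>> guess at what a receiving node would need; it is not
> >>>>>>>>>>>>>>>>>>>>>>>> an existing Ignite class):
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> import java.io.Serializable;
> >>>>>>>>>>>>>>>>>>>>>>>> import java.util.Map;
> >>>>>>>>>>>>>>>>>>>>>>>> import java.util.UUID;
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> public class TxTransferData implements Serializable {
> >>>>>>>>>>>>>>>>>>>>>>>>     private final UUID txId;                         // originating tx id
> >>>>>>>>>>>>>>>>>>>>>>>>     private final Map<Object, Object> pendingWrites; // key -> pending value
> >>>>>>>>>>>>>>>>>>>>>>>>     private final Map<Object, UUID> readVersions;    // key -> observed version
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     public TxTransferData(UUID txId, Map<Object, Object> pendingWrites,
> >>>>>>>>>>>>>>>>>>>>>>>>         Map<Object, UUID> readVersions) {
> >>>>>>>>>>>>>>>>>>>>>>>>         this.txId = txId;
> >>>>>>>>>>>>>>>>>>>>>>>>         this.pendingWrites = pendingWrites;
> >>>>>>>>>>>>>>>>>>>>>>>>         this.readVersions = readVersions;
> >>>>>>>>>>>>>>>>>>>>>>>>     }
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     public UUID txId() { return txId; }
> >>>>>>>>>>>>>>>>>>>>>>>>     public Map<Object, Object> pendingWrites() { return pendingWrites; }
> >>>>>>>>>>>>>>>>>>>>>>>>     public Map<Object, UUID> readVersions() { return readVersions; }
> >>>>>>>>>>>>>>>>>>>>>>>> }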
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV
> >>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Well, there are a couple of issues preventing the
> >>>>>>>>>>>>>>>>>>>>>>>>> transaction from proceeding.
> >>>>>>>>>>>>>>>>>>>>>>>>> At first, after transaction serialization and
> >>>>>>>>>>>>>>>>>>>>>>>>> deserialization on the remote server there is no
> >>>>>>>>>>>>>>>>>>>>>>>>> txState, so I'm going to put it in
> >>>>>>>>>>>>>>>>>>>>>>>>> writeExternal()\readExternal().
> >>>>>>>>>>>>>>>>>>>>>>>>> The last one is: the deserialized transaction lacks
> >>>>>>>>>>>>>>>>>>>>>>>>> the shared cache context field at
> >>>>>>>>>>>>>>>>>>>>>>>>> TransactionProxyImpl. Perhaps it must be injected
> >>>>>>>>>>>>>>>>>>>>>>>>> by GridResourceProcessor?
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Mon, 13 Mar 2017 at 17:27, ALEKSEY KUZNETSOV
> >>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> While starting and continuing a transaction in
> >>>>>>>>>>>>>>>>>>>>>>>>>> different jvms I run into a serialization
> >>>>>>>>>>>>>>>>>>>>>>>>>> exception in writeExternalMeta:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void writeExternal(ObjectOutput out) throws IOException {
> >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Some meta cannot be serialized.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Fri, 10 Mar 2017 at 17:25, Alexey Goncharuk
> >>>>>>>>>>>>>>>>>>>>>>>>>> <alexey.goncharuk@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to get what you want, but
> >>>>>>>>>>>>>>>>>>>>>>>>>>> I have a few concerns:
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the proposed change? In
> >>>>>>>>>>>>>>>>>>>>>>>>>>> your test, you pass an instance of a transaction
> >>>>>>>>>>>>>>>>>>>>>>>>>>> created on ignite(0) to the ignite instance
> >>>>>>>>>>>>>>>>>>>>>>>>>>> ignite(1). This is obviously not possible in a
> >>>>>>>>>>>>>>>>>>>>>>>>>>> truly distributed (multi-jvm) environment.
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize cache update actions
> >>>>>>>>>>>>>>>>>>>>>>>>>>> and transaction commit? Say, you have one node
> >>>>>>>>>>>>>>>>>>>>>>>>>>> that decided to commit, but another node is
> >>>>>>>>>>>>>>>>>>>>>>>>>>> still writing within this transaction. How do
> >>>>>>>>>>>>>>>>>>>>>>>>>>> you make sure that two nodes will not call
> >>>>>>>>>>>>>>>>>>>>>>>>>>> commit() and rollback() simultaneously?
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - How do you make sure that either commit() or
> >>>>>>>>>>>>>>>>>>>>>>>>>>> rollback() is called if an originator failed?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов
> >>>>>>>>>>>>>>>>>>>>>>>>>>> <somefireone@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Alexey Goncharuk, heh, my initial understanding
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> was that the transfer of tx ownership from one
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> node to another would happen automatically when
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> the originating node goes down.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm aiming to span a transaction over multiple
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> threads, nodes, and jvms (soon), so every node
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> is able to roll back or commit the common
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> transaction. It turned out I need to transfer
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> the tx between nodes in order to commit the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> transaction in a different node (in the same
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> jvm).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fri, 10 Mar 2017 at 15:20, Alexey Goncharuk
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> <alexey.goncharuk@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Do you mean that you want a concept of
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> transferring tx ownership from one node to
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> another? My initial understanding was that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you want to be able to update keys in a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> transaction from multiple threads in
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parallel.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --AG
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well. Consider a transaction started in one
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node and continued in another one. The
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following test describes my idea:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ignite ignite1 = ignite(0);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteTransactions transactions = ignite1.transactions();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Transaction tx = transactions.txStart(concurrency, isolation);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key1", 1);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key2", 2);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> tx.stop();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     IgniteTransactions ts = ignite(1).transactions();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertNull(ts.tx());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     ts.txStart(tx);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     cache.put("key3", 3);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertTrue(cache.remove("key2"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     tx.commit();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     return true;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> });
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> fut.get();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)1, (long)cache.get("key1"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)3, (long)cache.get("key3"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertFalse(cache.containsKey("key2"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In method *ts.txStart(...)* we just rebind
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *tx* to the current thread:
>
> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Hi! No, I don't have a ticket for this.
In the pull request I have implemented methods that change the transaction
status to STOP, thus allowing the transaction to be committed in another
thread. In that other thread you are going to restart the transaction in
order to commit it.
The mechanism behind it is straightforward: we change the thread id to the
new one in the thread map, and use serialization of the txState and the
transaction itself to transfer them into another thread.
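
To make the mechanism concrete, here is a rough model of the thread-map swap
in plain Java (a sketch only: the map layout and the IgniteInternalTx
stand-in are simplified assumptions, not the real transaction manager
internals):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Simplified stand-in for the transaction manager's per-thread map. */
final class TxThreadMap {
    private final ConcurrentMap<Long, Object /* IgniteInternalTx */> threadMap =
        new ConcurrentHashMap<>();

    /** tx.stop(): detach the tx from the thread that started it (state -> STOPPED). */
    void stop(long ownerThreadId, Object tx) {
        threadMap.remove(ownerThreadId, tx);
    }

    /** ts.txStart(tx): rebind the tx to the calling thread (state -> ACTIVE). */
    void reopen(Object tx) {
        threadMap.put(Thread.currentThread().getId(), tx);
    }
}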


Tue, 28 Mar 2017 at 20:15, Denis Magda <dm...@apache.org>:

> Aleksey,
>
> Do you have a ticket for this? Could you briefly list what exactly was
> done and how things work?
>
> —
> Denis
>
> > On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <al...@gmail.com>
> wrote:
> >
> > Hi, Igniters! I've made an implementation of transactions with a non-single
> > coordinator. Here you can start a transaction in one thread and commit it in
> > another thread.
> > Take a look at it. Give your thoughts on it.
> >
> >
> https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> >
> Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <se...@gmail.com>:
> >
> >> You know better, go ahead! :)
> >>
> >> Sergi
> >>
> >> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> >>
> >>> we've discovered several problems regarding your "accumulation"
> >>> approach. These are:
> >>>
> >>>   1. performance issues when transferring data from the temporary cache
> >>>   to the permanent one. Keep in mind the great deal of concurrent
> >>>   transactions in the Service committer.
> >>>   2. extreme memory load when keeping the temporary cache in memory.
> >>>   3. as long as the user is not acquainted with Ignite, working with the
> >>>   cache must be transparent for him. Keep this in mind. The user's node
> >>>   can evaluate logic with no transaction at all, so we should deal with
> >>>   both types of execution flow: transactional and non-transactional.
> >>>   Another problem is transaction id support at the user node. We would
> >>>   have to handle all these issues and many more.
> >>>   4. we cannot pessimistically lock an entity.
> >>>
> >>> As a result, we decided to move on to building a distributed transaction.
> >>> We put aside your "accumulation" approach until we realize how to solve
> >>> the difficulties above.
> >>>
> >>> Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>
> >>>> The problem "How to run millions of entities, and millions of
> >> operations
> >>> on
> >>>> a single Pentium3" is out of scope here. Do the math, plan capacity
> >>>> reasonably.
> >>>>
> >>>> Sergi
> >>>>
> >>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> >>> :
> >>>>
> >>>>> hmm, if we have millions of entities and millions of operations, would
> >>>>> not this approach lead to memory overflow and performance degradation?
> >>>>>
> >>>>> Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>
> >>>>>> 1. Actually you have to check versions on all the values you have
> >>> read
> >>>>>> during the tx.
> >>>>>>
> >>>>>> For example if we have [k1 => v1, k2 => v2] and do:
> >>>>>>
> >>>>>> put(k1, get(k2) + 5)
> >>>>>>
> >>>>>> We have to remember the version for k2. This logic can be
> >> relatively
> >>>>> easily
> >>>>>> encapsulated in a framework atop of Ignite. You need to implement
> >> one
> >>>> to
> >>>>>> make all this stuff usable.
> >>>>>>
> >>>>>> 2. I suggest to avoid any locking here, because you easily will end
> >>> up
> >>>>> with
> >>>>>> deadlocks. If you do not have too frequent updates for your keys,
> >>>>>> optimistic approach will work just fine.
> >>>>>>
> >>>>>> Theoretically in the Committer Service you can start a thread for
> >> the
> >>>>>> lifetime of the whole distributed transaction, take a lock on the
> >> key
> >>>>> using
> >>>>>> IgniteCache.lock(K key) before executing any Services, wait for all
> >>> the
> >>>>>> services to complete, execute optimistic commit in the same thread
> >>>> while
> >>>>>> keeping this lock and then release it. Notice that all the Ignite
> >>>>>> transactions inside of all Services must be optimistic here to be
> >>> able
> >>>> to
> >>>>>> read this locked key.
> >>>>>>
> >>>>>> But again I do not recommend you using this approach until you
> >> have a
> >>>>>> reliable deadlock avoidance scheme.
> >>>>>>
> >>>>>> Sergi
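
For illustration, the read-version bookkeeping for the put(k1, get(k2) + 5)
example could look roughly like this (a sketch: readVersion() is a
hypothetical helper, since the plain IgniteCache API does not expose entry
versions directly):

import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.IgniteCache;

// Remember the version of every key read inside the tx, so the commit
// step can verify that none of them changed. readVersion() is assumed.
static void updateWithVersionTracking(IgniteCache<String, Integer> cache,
                                      Map<String, Object> readVersions) {
    Integer v2 = cache.get("k2");
    readVersions.put("k2", readVersion(cache, "k2"));

    cache.put("k1", v2 + 5);
    // At commit time: for each entry of readVersions, compare the remembered
    // version with the current one and abort on any mismatch.
}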
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>
> >>>>>>> Yeah, now I got it.
> >>>>>>> There are some doubts on this approach:
> >>>>>>> 1) During the optimistic commit phase, when you assure no one altered
> >>>>>>> the original values, you must check versions of other dependent keys.
> >>>>>>> How could we obtain those keys (in an automated manner, of course)?
> >>>>>>> 2) How could we lock a key before some Service A introduces changes,
> >>>>>>> so that no other service is allowed to change this key-value (a sort
> >>>>>>> of pessimistic locking)?
> >>>>>>> Maybe you know some implementations of such an approach?
> >>>>>>>
> >>>>>>> Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>
> >>>>>>>> Thank you very much for help.  I will answer later.
> >>>>>>>>
> >>>>>>>> Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>
> >>>>>>>> All the services do not update key in place, but only generate
> >>> new
> >>>>> keys
> >>>>>>>> augmented by otx and store the updated value in the same cache
> >> +
> >>>>>> remember
> >>>>>>>> the keys and versions participating in the transaction in some
> >>>>> separate
> >>>>>>>> atomic cache.
> >>>>>>>>
> >>>>>>>> Follow this sequence of changes applied to cache contents by
> >> each
> >>>>>>> Service:
> >>>>>>>>
> >>>>>>>> Initial cache contents:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>
> >>>>>>>> Cache contents after Service A:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2a]
> >>>>>>>>
> >>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> >>> atomic
> >>>>>> cache
> >>>>>>>>
> >>>>>>>> Cache contents after Service B:
> >>>>>>>>            [k1 => v1]
> >>>>>>>>            [k2 => v2]
> >>>>>>>>            [k3 => v3]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2ab]
> >>>>>>>>            [k3x => v3b]
> >>>>>>>>
> >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> >>>>> separate
> >>>>>>>> atomic cache
> >>>>>>>>
> >>>>>>>> Finally the Committer Service takes this map of updated keys
> >> and
> >>>>> their
> >>>>>>>> versions from some separate atomic cache, starts Ignite
> >>> transaction
> >>>>> and
> >>>>>>>> replaces all the values for k* keys to values taken from k*x
> >>> keys.
> >>>>> The
> >>>>>>>> successful result must be the following:
> >>>>>>>>
> >>>>>>>>            [k1 => v1a]
> >>>>>>>>            [k2 => v2ab]
> >>>>>>>>            [k3 => v3b]
> >>>>>>>>            [k1x => v1a]
> >>>>>>>>            [k2x => v2ab]
> >>>>>>>>            [k3x => v3b]
> >>>>>>>>
> >>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> >>>>> separate
> >>>>>>>> atomic cache
> >>>>>>>>
> >>>>>>>> But Committer Service also has to check that no one updated the
> >>>>>> original
> >>>>>>>> values before us, because otherwise we can not give any
> >>>>> serializability
> >>>>>>>> guarantee for these distributed transactions. Here we may need
> >> to
> >>>>> check
> >>>>>>> not
> >>>>>>>> only versions of the updated keys, but also versions of any
> >> other
> >>>>> keys
> >>>>>>> end
> >>>>>>>> result depends on.
> >>>>>>>>
> >>>>>>>> After that Committer Service has to do a cleanup (may be
> >> outside
> >>> of
> >>>>> the
> >>>>>>>> committing tx) to come to the following final state:
> >>>>>>>>
> >>>>>>>>            [k1 => v1a]
> >>>>>>>>            [k2 => v2ab]
> >>>>>>>>            [k3 => v3b]
> >>>>>>>>
> >>>>>>>> Makes sense?
> >>>>>>>>
> >>>>>>>> Sergi
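
A compact sketch of that committer step (Val with a `ver` field, tempKey(),
and the updatesCache layout follow the scheme above; all names here are
illustrative, not an existing API):

// Committer: atomically swap k* values with their k*x counterparts,
// aborting on any version mismatch.
try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    Map<String, UUID> expectedVers = updatesCache.get(otx); // [k -> ver read by services]

    for (Map.Entry<String, UUID> e : expectedVers.entrySet()) {
        Val cur = cache.get(e.getKey());

        // Someone committed a newer version since the services read it: restart.
        if (cur != null && !cur.ver.equals(e.getValue()))
            throw new TransactionRollbackException("Version mismatch for " + e.getKey());

        cache.put(e.getKey(), cache.get(tempKey(e.getKey(), otx)));
    }

    tx.commit();
}
// Cleanup of the temporary k*x entries may happen outside the committing tx;
// their TTL guarantees they disappear even if the committer crashes here.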
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> >>>>> alkuznetsov.sb@gmail.com
> >>>>>>> :
> >>>>>>>>
> >>>>>>>>>   - what do you mean by saying "*in a single transaction checks
> >>>>>>>>>   value versions for all the old values and replaces them with
> >>>>>>>>>   calculated new ones*"? Every time you change a value (in some
> >>>>>>>>>   service), you store it to *some special atomic cache*, so when
> >>>>>>>>>   all services have ceased working, the Service committer gets the
> >>>>>>>>>   values with the latest versions.
> >>>>>>>>>   - After "*does cleanup of temporary keys and values*" the Service
> >>>>>>>>>   committer persists them into the permanent store, isn't it?
> >>>>>>>>>   - I can't grasp your thought, you say "*in case of version
> >>>>>>>>>   mismatch or TX timeout just rollbacks*". But what versions would
> >>>>>>>>>   it match?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>
> >>>>>>>>>> Ok, here is what you actually need to implement at the
> >>>>> application
> >>>>>>>> level.
> >>>>>>>>>>
> >>>>>>>>>> Let's say we have to call 2 services in the following order:
> >>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 => v2]
> >> to
> >>>>> [k1
> >>>>>> =>
> >>>>>>>>> v1a,
> >>>>>>>>>>  k2 => v2a]
> >>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
> >> to
> >>>> [k2
> >>>>>> =>
> >>>>>>>>> v2ab,
> >>>>>>>>>> k3 => v3b]
> >>>>>>>>>>
> >>>>>>>>>> The change
> >>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> >>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> >>>>>>>>>> must happen in a single transaction.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Optimistic protocol to solve this:
> >>>>>>>>>>
> >>>>>>>>>> Each cache key must have a field `otx`, which is a unique
> >>>>>>> orchestrator
> >>>>>>>> TX
> >>>>>>>>>> identifier - it must be a parameter passed to all the
> >>> services.
> >>>>> If
> >>>>>>>> `otx`
> >>>>>>>>> is
> >>>>>>>>>> set to some value it means that it is an intermediate key
> >> and
> >>>> is
> >>>>>>>> visible
> >>>>>>>>>> only inside of some transaction, for the finalized key
> >> `otx`
> >>>> must
> >>>>>> be
> >>>>>>>>> null -
> >>>>>>>>>> it means the key is committed and visible for everyone.
> >>>>>>>>>>
> >>>>>>>>>> Each cache value must have a field `ver` which is a version
> >>> of
> >>>>> that
> >>>>>>>>> value.
> >>>>>>>>>>
> >>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use
> >>>> UUID.
> >>>>>>>>>>
> >>>>>>>>>> Workflow is the following:
> >>>>>>>>>>
> >>>>>>>>>> Orchestrator starts the distributed transaction with `otx`
> >> =
> >>> x
> >>>>> and
> >>>>>>>> passes
> >>>>>>>>>> this parameter to all the services.
> >>>>>>>>>>
> >>>>>>>>>> Service A:
> >>>>>>>>>> - does some computations
> >>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> >>>>>>>>>>      where
> >>>>>>>>>>          Za - left time from max Orchestrator TX duration
> >>>> after
> >>>>>>>> Service
> >>>>>>>>> A
> >>>>>>>>>> end
> >>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` =
> >> x
> >>>>>>>>>>          v2a has updated version `ver`
> >>>>>>>>>> - returns a set of updated keys and all the old versions
> >> to
> >>>> the
> >>>>>>>>>> orchestrator
> >>>>>>>>>>       or just stores it in some special atomic cache like
> >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> >>>>>>>>>>
> >>>>>>>>>> Service B:
> >>>>>>>>>> - retrieves the updated value k2x => v2a because it knows
> >>>> `otx`
> >>>>> =
> >>>>>> x
> >>>>>>>>>> - does computations
> >>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> >>>>>>>>>> - updates the set of updated keys like [x => (k1 -> ver1,
> >> k2
> >>>> ->
> >>>>>>> ver2,
> >>>>>>>> k3
> >>>>>>>>>> -> ver3)] TTL = Zb
> >>>>>>>>>>
> >>>>>>>>>> Service Committer (may be embedded into Orchestrator):
> >>>>>>>>>> - takes all the updated keys and versions for `otx` = x
> >>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> >>>>>>>>>> - in a single transaction checks value versions for all
> >> the
> >>>> old
> >>>>>>> values
> >>>>>>>>>>       and replaces them with calculated new ones
> >>>>>>>>>> - does cleanup of temporary keys and values
> >>>>>>>>>> - in case of version mismatch or TX timeout just rollbacks
> >>> and
> >>>>>>> signals
> >>>>>>>>>>        to Orchestrator to restart the job with new `otx`
> >>>>>>>>>>
> >>>>>>>>>> PROFIT!!
> >>>>>>>>>>
> >>>>>>>>>> This approach even allows you to run independent parts of
> >> the
> >>>>> graph
> >>>>>>> in
> >>>>>>>>>> parallel (with TX transfer you will always run only one at
> >> a
> >>>>> time).
> >>>>>>>> Also
> >>>>>>>>> it
> >>>>>>>>>> does not require inventing any special fault tolerance techniques
> >>>>>>>>>> because
> >>>>>>>>>> Ignite caches are already fault tolerant and all the
> >>>> intermediate
> >>>>>>>> results
> >>>>>>>>>> are virtually invisible and stored with TTL, thus in case
> >> of
> >>>> any
> >>>>>>> crash
> >>>>>>>>> you
> >>>>>>>>>> will not have inconsistent state or garbage.
> >>>>>>>>>>
> >>>>>>>>>> Sergi
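
As an illustration of the data model and the TTL-bounded intermediate writes
(the OtxKey/OtxVal classes and remainingTxBudgetMillis() are assumptions;
withExpiryPolicy and CreatedExpiryPolicy are the standard JCache-style API):

import java.util.UUID;
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

// Key/value layout for the protocol above (illustrative classes).
class OtxKey {
    final String key; final UUID otx; // otx == null => finalized, visible to all
    OtxKey(String key, UUID otx) { this.key = key; this.otx = otx; }
    // equals()/hashCode() over (key, otx) are required for cache lookups.
}

class OtxVal {
    final UUID ver; final Object data; // ver is what the committer compares
    OtxVal(UUID ver, Object data) { this.ver = ver; this.data = data; }
}

// Service A storing an intermediate result, bounded by the remaining TX budget:
long za = remainingTxBudgetMillis(); // hypothetical helper
cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, za)))
     .put(new OtxKey("k1x", otx), new OtxVal(UUID.randomUUID(), v1a));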
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>
> >>>>>>>>>>> Okay, we are open to proposals on the business task. I mean, we
> >>>>>>>>>>> can make use of some other thing, not a distributed transaction.
> >>>>>>>>>>> No transaction yet.
> >>>>>>>>>>>
> >>>>>>>>>>> Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> >>>>>>>>>>>
> >>>>>>>>>>>> IMO the use case makes sense. However, as Sergi already
> >>>>>>>>>>>> mentioned, the problem is far more complex than simply passing
> >>>>>>>>>>>> TX state over a wire. Most probably a kind of coordinator will
> >>>>>>>>>>>> still be required to manage all kinds of failures. This task
> >>>>>>>>>>>> should be started with a clean design proposal explaining how we
> >>>>>>>>>>>> handle all these concurrent events. And only then, when we
> >>>>>>>>>>>> understand all implications, should we move to the development
> >>>>>>>>>>>> stage.
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> >>>>>>>>>>>> alkuznetsov.sb@gmail.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Right
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Wed, 15 Mar 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Good! Basically your orchestrator just takes some predefined graph of
> >>>>>>>>>>>>>> distributed services to be invoked, calls them by some kind of RPC and
> >>>>>>>>>>>>>> passes the needed parameters between them, right?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> The orchestrator is a custom thing. It is responsible for managing
> >>>>>>>>>>>>>>> business scenario flows. Many nodes are involved in the scenarios.
> >>>>>>>>>>>>>>> They exchange data and follow one another. If you are acquainted with
> >>>>>>>>>>>>>>> the BPMN framework, the orchestrator is like a BPMN engine.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Tue, 14 Mar 2017, 18:56 Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> What is Orchestrator for you? Is it a thing from Microsoft or your
> >>>>>>>>>>>>>>>> custom in-house software?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Fine. Let's say we've got multiple servers which fulfill custom
> >>>>>>>>>>>>>>>>> logic. These servers compound an oriented graph (a BPMN process)
> >>>>>>>>>>>>>>>>> which is controlled by the Orchestrator.
> >>>>>>>>>>>>>>>>> For instance, *server1* creates *variable A* with value 1, persists
> >>>>>>>>>>>>>>>>> it to the IGNITE cache and creates *variable B* and sends it to
> >>>>>>>>>>>>>>>>> *server2*. The latter receives *variable B*, does some logic with it
> >>>>>>>>>>>>>>>>> and stores it to IGNITE.
> >>>>>>>>>>>>>>>>> All the work made by both servers must be fulfilled in *one*
> >>>>>>>>>>>>>>>>> transaction, because we need all information done, or nothing
> >>>>>>>>>>>>>>>>> (rolled back). The scenario is managed by the orchestrator.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Ok, it is not a business case, it is your wrong solution for it.
> >>>>>>>>>>>>>>>>>> Let's try again: what is the business case?
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> The case is the following: one starts a transaction in one node,
> >>>>>>>>>>>>>>>>>>> and commits this transaction in another jvm node (or rolls it back
> >>>>>>>>>>>>>>>>>>> remotely).
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Because even if you make it work for some simplistic scenario, get
> >>>>>>>>>>>>>>>>>>>> ready to write many fault tolerance tests and make sure that your
> >>>>>>>>>>>>>>>>>>>> TXs work gracefully in all modes in case of crashes. Also make sure
> >>>>>>>>>>>>>>>>>>>> that we do not have any performance drops after all your changes in
> >>>>>>>>>>>>>>>>>>>> existing benchmarks. All in all I don't believe these conditions
> >>>>>>>>>>>>>>>>>>>> will be met and your contribution will be accepted.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Better solution to what problem? Sending a TX to another node? The
> >>>>>>>>>>>>>>>>>>>> problem statement itself is already wrong. What business case are
> >>>>>>>>>>>>>>>>>>>> you trying to solve? I'm sure everything you need can be done in a
> >>>>>>>>>>>>>>>>>>>> much more simple and efficient way at the application level.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Why wrong? Do you know a better solution?
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Tue, 14 Mar 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Just serializing the TX object and deserializing it on another
> >>>>>>>>>>>>>>>>>>>>>> node is meaningless, because other nodes participating in the TX
> >>>>>>>>>>>>>>>>>>>>>> have to know about the new coordinator. This will require protocol
> >>>>>>>>>>>>>>>>>>>>>> changes; we definitely will have fault tolerance and performance
> >>>>>>>>>>>>>>>>>>>>>> issues. IMO the whole idea is wrong and it makes no sense to waste
> >>>>>>>>>>>>>>>>>>>>>> time on it.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Sergi
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> The IgniteTransactionState implementation contains
> >>>>>>>>>>>>>>>>>>>>>>> IgniteTxEntry's, which are supposed to be transferable.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Mon, 13 Mar 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> It sounds a little scary to me that we are passing transaction
> >>>>>>>>>>>>>>>>>>>>>>>> objects around. Such an object may contain all sorts of Ignite
> >>>>>>>>>>>>>>>>>>>>>>>> context. If some data needs to be passed across, we should create
> >>>>>>>>>>>>>>>>>>>>>>>> a special transfer object in this case.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> D.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> well, there are a couple of issues preventing the transaction
> >>>>>>>>>>>>>>>>>>>>>>>>> from proceeding.
> >>>>>>>>>>>>>>>>>>>>>>>>> At first: after transaction serialization and deserialization on
> >>>>>>>>>>>>>>>>>>>>>>>>> the remote server, there is no txState. So I am going to put it
> >>>>>>>>>>>>>>>>>>>>>>>>> in writeExternal()/readExternal().
> >>>>>>>>>>>>>>>>>>>>>>>>> The last one is that the deserialized transaction lacks the
> >>>>>>>>>>>>>>>>>>>>>>>>> shared cache context field at TransactionProxyImpl. Perhaps it
> >>>>>>>>>>>>>>>>>>>>>>>>> must be injected by GridResourceProcessor?
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> Mon, 13 Mar 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> while starting and continuing a transaction in different jvms I
> >>>>>>>>>>>>>>>>>>>>>>>>>> run into a serialization exception in writeExternalMeta:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> @Override public void writeExternal(ObjectOutput out) throws IOException {
> >>>>>>>>>>>>>>>>>>>>>>>>>>     writeExternalMeta(out);
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Some meta cannot be serialized.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Fri, 10 Mar 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> I think I am starting to get what you want, but I have a few
> >>>>>>>>>>>>>>>>>>>>>>>>>>> concerns:
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - What is the API for the proposed change? In your test, you
> >>>>>>>>>>>>>>>>>>>>>>>>>>> pass an instance of a transaction created on ignite(0) to the
> >>>>>>>>>>>>>>>>>>>>>>>>>>> ignite instance ignite(1). This is obviously not possible in a
> >>>>>>>>>>>>>>>>>>>>>>>>>>> truly distributed (multi-jvm) environment.
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - How will you synchronize cache update actions and transaction
> >>>>>>>>>>>>>>>>>>>>>>>>>>> commit? Say, you have one node that decided to commit, but
> >>>>>>>>>>>>>>>>>>>>>>>>>>> another node is still writing within this transaction. How do
> >>>>>>>>>>>>>>>>>>>>>>>>>>> you make sure that two nodes will not call commit() and
> >>>>>>>>>>>>>>>>>>>>>>>>>>> rollback() simultaneously?
> >>>>>>>>>>>>>>>>>>>>>>>>>>> - How do you make sure that either commit() or rollback() is
> >>>>>>>>>>>>>>>>>>>>>>>>>>> called if an originator failed?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <somefireone@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Alexey Goncharuk, heh, my initial understanding was that
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> transferring of tx ownership from one node to another will
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> happen automatically when the originating node goes down.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm aiming to span a transaction over multiple threads, nodes,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> jvms (soon). So every node is able to rollback or commit the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> common transaction. It turned out I need to transfer the tx
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> between nodes in order to commit the transaction in a different
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> node (in the same jvm).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fri, 10 Mar 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Aleksey,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Do you mean that you want a concept of transferring of tx
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ownership from one node to another? My initial understanding
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> was that you want to be able to update keys in a transaction
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from multiple threads in parallel.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --AG
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well. Consider a transaction started in one node, and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> continued in another one.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The following test describes my idea:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ignite ignite1 = ignite(0);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteTransactions transactions = ignite1.transactions();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Transaction tx = transactions.txStart(concurrency, isolation);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key1", 1);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cache.put("key2", 2);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> tx.stop();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     IgniteTransactions ts = ignite(1).transactions();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertNull(ts.tx());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     ts.txStart(tx);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     cache.put("key3", 3);
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     Assert.assertTrue(cache.remove("key2"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     tx.commit();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     return true;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> });
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> fut.get();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)1, (long)cache.get("key1"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertEquals((long)3, (long)cache.get("key3"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Assert.assertFalse(cache.containsKey("key2"));
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In method *ts.txStart(...)* we just rebind *tx* to the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> current thread:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> public void txStart(Transaction tx) {
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     cctx.tm().reopenTx(transactionProxy.tx());
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>     transactionProxy.bindToCurrentThread();
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> In method *reopenTx* we alter *threadMap* so that it binds the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> transaction to the current thread.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What do you think about it?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tue, 7 Mar 2017 at 22:38, Denis Magda <dmagda@apache.org>:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Alexey,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Please share the rationale behind this and the thoughts,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> design ideas you have in mind.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> —
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Denis
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi all! I'm designing a distributed transaction which can
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be started at one node, and continued at another one. Has
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> anybody thoughts on it?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Best Regards,*
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Kuznetsov Aleksey*
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Denis Magda <dm...@apache.org>.
Aleksey,

Do you have a ticket for this? Could you briefly list what exactly was done and how things work?

—
Denis

> On Mar 28, 2017, at 8:32 AM, ALEKSEY KUZNETSOV <al...@gmail.com> wrote:
> 
> Hi, Igniters! I 've made implementation of transactions of non-single
> coordinator. Here you can start transaction in one thread and commit it in
> another thread.
> Take a look on it. Give your thoughts on it.
> 
> https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
> 
> пт, 17 мар. 2017 г. в 19:26, Sergi Vladykin <se...@gmail.com>:
> 
>> You know better, go ahead! :)
>> 
>> Sergi
>> 
>> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>> 
>>> we've discovered several problems regarding your "accumulation"
>>> approach.These are
>>> 
>>>   1. perfomance issues when transfering data from temporary cache to
>>>   permanent one. Keep in mind big deal of concurent transactions in
>>> Service
>>>   commiter
>>>   2. extreme memory load when keeping temporary cache in memory
>>>   3. As long as user is not acquainted with ignite, working with cache
>>>   must be transparent for him. Keep this in mind. User's node can
>> evaluate
>>>   logic with no transaction at all, so we should deal with both types of
>>>   execution flow : transactional and non-transactional.Another one
>>> problem is
>>>   transaction id support at the user node. We would have handled all
>> this
>>>   issues and many more.
>>>   4. we cannot pessimistically lock entity.
>>> 
>>> As a result, we decided to move on building distributed transaction. We
>> put
>>> aside your "accumulation" approach until we realize how to solve
>>> difficulties above .
>>> 
>>> чт, 16 мар. 2017 г. в 16:56, Sergi Vladykin <se...@gmail.com>:
>>> 
>>>> The problem "How to run millions of entities, and millions of
>> operations
>>> on
>>>> a single Pentium3" is out of scope here. Do the math, plan capacity
>>>> reasonably.
>>>> 
>>>> Sergi
>>>> 
>>>> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
>>> :
>>>> 
>>>>> hmm, If we have millions of entities, and millions of operations,
>> would
>>>> not
>>>>> this approache lead to memory overflow and perfomance degradation
>>>>> 
>>>>> чт, 16 мар. 2017 г. в 15:42, Sergi Vladykin <
>> sergi.vladykin@gmail.com
>>>> :
>>>>> 
>>>>>> 1. Actually you have to check versions on all the values you have
>>> read
>>>>>> during the tx.
>>>>>> 
>>>>>> For example if we have [k1 => v1, k2 => v2] and do:
>>>>>> 
>>>>>> put(k1, get(k2) + 5)
>>>>>> 
>>>>>> We have to remember the version for k2. This logic can be
>> relatively
>>>>> easily
>>>>>> encapsulated in a framework atop of Ignite. You need to implement
>> one
>>>> to
>>>>>> make all this stuff usable.
>>>>>> 
>>>>>> 2. I suggest to avoid any locking here, because you easily will end
>>> up
>>>>> with
>>>>>> deadlocks. If you do not have too frequent updates for your keys,
>>>>>> optimistic approach will work just fine.
>>>>>> 
>>>>>> Theoretically in the Committer Service you can start a thread for
>> the
>>>>>> lifetime of the whole distributed transaction, take a lock on the
>> key
>>>>> using
>>>>>> IgniteCache.lock(K key) before executing any Services, wait for all
>>> the
>>>>>> services to complete, execute optimistic commit in the same thread
>>>> while
>>>>>> keeping this lock and then release it. Notice that all the Ignite
>>>>>> transactions inside of all Services must be optimistic here to be
>>> able
>>>> to
>>>>>> read this locked key.
>>>>>> 
>>>>>> But again I do not recommend you using this approach until you
>> have a
>>>>>> reliable deadlock avoidance scheme.
>>>>>> 
>>>>>> Sergi
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
>>> alkuznetsov.sb@gmail.com
>>>>> :
>>>>>> 
>>>>>>> Yeah, now i got it.
>>>>>>> There are some doubts on this approach
>>>>>>> 1) During optimistic commit phase, when you assure no one altered
>>> the
>>>>>>> original values, you must check versions of other dependent keys.
>>> How
>>>>>> could
>>>>>>> we obtain those keys(in an automative manner, of course) ?
>>>>>>> 2) How could we lock a key before some Service A introduce
>> changes?
>>>> So
>>>>> no
>>>>>>> other service is allowed to change this key-value?(sort of
>>>> pessimistic
>>>>>>> blocking)
>>>>>>> May be you know some implementations of such approach ?
>>>>>>> 
>>>>>>> ср, 15 мар. 2017 г. в 17:54, ALEKSEY KUZNETSOV <
>>>>> alkuznetsov.sb@gmail.com
>>>>>>> :
>>>>>>> 
>>>>>>>> Thank you very much for help.  I will answer later.
>>>>>>>> 
>>>>>>>> ср, 15 мар. 2017 г. в 17:39, Sergi Vladykin <
>>>>> sergi.vladykin@gmail.com
>>>>>>> :
>>>>>>>> 
>>>>>>>> All the services do not update key in place, but only generate
>>> new
>>>>> keys
>>>>>>>> augmented by otx and store the updated value in the same cache
>> +
>>>>>> remember
>>>>>>>> the keys and versions participating in the transaction in some
>>>>> separate
>>>>>>>> atomic cache.
>>>>>>>> 
>>>>>>>> Follow this sequence of changes applied to cache contents by
>> each
>>>>>>> Service:
>>>>>>>> 
>>>>>>>> Initial cache contents:
>>>>>>>>            [k1 => v1]
>>>>>>>>            [k2 => v2]
>>>>>>>>            [k3 => v3]
>>>>>>>> 
>>>>>>>> Cache contents after Service A:
>>>>>>>>            [k1 => v1]
>>>>>>>>            [k2 => v2]
>>>>>>>>            [k3 => v3]
>>>>>>>>            [k1x => v1a]
>>>>>>>>            [k2x => v2a]
>>>>>>>> 
>>>>>>>>         + [x => (k1 -> ver1, k2 -> ver2)] in some separate
>>> atomic
>>>>>> cache
>>>>>>>> 
>>>>>>>> Cache contents after Service B:
>>>>>>>>            [k1 => v1]
>>>>>>>>            [k2 => v2]
>>>>>>>>            [k3 => v3]
>>>>>>>>            [k1x => v1a]
>>>>>>>>            [k2x => v2ab]
>>>>>>>>            [k3x => v3b]
>>>>>>>> 
>>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
>>>>> separate
>>>>>>>> atomic cache
>>>>>>>> 
>>>>>>>> Finally the Committer Service takes this map of updated keys
>> and
>>>>> their
>>>>>>>> versions from some separate atomic cache, starts Ignite
>>> transaction
>>>>> and
>>>>>>>> replaces all the values for k* keys to values taken from k*x
>>> keys.
>>>>> The
>>>>>>>> successful result must be the following:
>>>>>>>> 
>>>>>>>>            [k1 => v1a]
>>>>>>>>            [k2 => v2ab]
>>>>>>>>            [k3 => v3b]
>>>>>>>>            [k1x => v1a]
>>>>>>>>            [k2x => v2ab]
>>>>>>>>            [k3x => v3b]
>>>>>>>> 
>>>>>>>>        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
>>>>> separate
>>>>>>>> atomic cache
>>>>>>>> 
>>>>>>>> But Committer Service also has to check that no one updated the
>>>>>> original
>>>>>>>> values before us, because otherwise we can not give any
>>>>> serializability
>>>>>>>> guarantee for these distributed transactions. Here we may need
>> to
>>>>> check
>>>>>>> not
>>>>>>>> only versions of the updated keys, but also versions of any
>> other
>>>>> keys
>>>>>>> end
>>>>>>>> result depends on.
>>>>>>>> 
>>>>>>>> After that Committer Service has to do a cleanup (may be
>> outside
>>> of
>>>>> the
>>>>>>>> committing tx) to come to the following final state:
>>>>>>>> 
>>>>>>>>            [k1 => v1a]
>>>>>>>>            [k2 => v2ab]
>>>>>>>>            [k3 => v3b]
>>>>>>>> 
>>>>>>>> Makes sense?
>>>>>>>> 
>>>>>>>> Sergi
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
>>>>> alkuznetsov.sb@gmail.com
>>>>>>> :
>>>>>>>> 
>>>>>>>>>   - what do u mean by saying "
>>>>>>>>> *in a single transaction checks value versions for all the
>> old
>>>>> values
>>>>>>>>>    and replaces them with calculated new ones *"? Every time
>>> you
>>>>>>> change
>>>>>>>>>   value(in some service), you store it to *some special
>> atomic
>>>>>> cache*
>>>>>>> ,
>>>>>>>> so
>>>>>>>>>   when all services ceased working, Service commiter got a
>>>> values
>>>>>> with
>>>>>>>> the
>>>>>>>>>   last versions.
>>>>>>>>>   - After "*does cleanup of temporary keys and values*"
>>> Service
>>>>>>> commiter
>>>>>>>>>   persists them into permanent store, isn't it ?
>>>>>>>>>   - I cant grasp your though, you say "*in case of version
>>>>> mismatch
>>>>>> or
>>>>>>>> TX
>>>>>>>>>   timeout just rollbacks*". But what versions would it
>> match?
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <
>>>>>> sergi.vladykin@gmail.com
>>>>>>>> :
>>>>>>>>> 
>>>>>>>>>> Ok, here is what you actually need to implement at the
>>>>> application
>>>>>>>> level.
>>>>>>>>>> 
>>>>>>>>>> Lets say we have to call 2 services in the following order:
>>>>>>>>>> - Service A: wants to update keys [k1 => v1,   k2 => v2]
>> to
>>>>> [k1
>>>>>> =>
>>>>>>>>> v1a,
>>>>>>>>>>  k2 => v2a]
>>>>>>>>>> - Service B: wants to update keys [k2 => v2a, k3 => v3]
>> to
>>>> [k2
>>>>>> =>
>>>>>>>>> v2ab,
>>>>>>>>>> k3 => v3b]
>>>>>>>>>> 
>>>>>>>>>> The change
>>>>>>>>>>    from [ k1 => v1,   k2 => v2,     k3 => v3   ]
>>>>>>>>>>    to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
>>>>>>>>>> must happen in a single transaction.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> Optimistic protocol to solve this:
>>>>>>>>>> 
>>>>>>>>>> Each cache key must have a field `otx`, which is a unique
>>>>>>> orchestrator
>>>>>>>> TX
>>>>>>>>>> identifier - it must be a parameter passed to all the
>>> services.
>>>>> If
>>>>>>>> `otx`
>>>>>>>>> is
>>>>>>>>>> set to some value it means that it is an intermediate key
>> and
>>>> is
>>>>>>>> visible
>>>>>>>>>> only inside of some transaction, for the finalized key
>> `otx`
>>>> must
>>>>>> be
>>>>>>>>> null -
>>>>>>>>>> it means the key is committed and visible for everyone.
>>>>>>>>>> 
>>>>>>>>>> Each cache value must have a field `ver` which is a version
>>> of
>>>>> that
>>>>>>>>> value.
>>>>>>>>>> 
>>>>>>>>>> For both fields (`otx` and `ver`) the safest way is to use
>>>> UUID.
>>>>>>>>>> 
>>>>>>>>>> Workflow is the following:
>>>>>>>>>> 
>>>>>>>>>> Orchestrator starts the distributed transaction with `otx`
>> =
>>> x
>>>>> and
>>>>>>>> passes
>>>>>>>>>> this parameter to all the services.
>>>>>>>>>> 
>>>>>>>>>> Service A:
>>>>>>>>>> - does some computations
>>>>>>>>>> - stores [k1x => v1a, k2x => v2a]  with TTL = Za
>>>>>>>>>>      where
>>>>>>>>>>          Za - left time from max Orchestrator TX duration
>>>> after
>>>>>>>> Service
>>>>>>>>> A
>>>>>>>>>> end
>>>>>>>>>>          k1x, k2x - new temporary keys with field `otx` =
>> x
>>>>>>>>>>          v2a has updated version `ver`
>>>>>>>>>> - returns a set of updated keys and all the old versions
>> to
>>>> the
>>>>>>>>>> orchestrator
>>>>>>>>>>       or just stores it in some special atomic cache like
>>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
>>>>>>>>>> 
>>>>>>>>>> Service B:
>>>>>>>>>> - retrieves the updated value k2x => v2a because it knows
>>>> `otx`
>>>>> =
>>>>>> x
>>>>>>>>>> - does computations
>>>>>>>>>> - stores [k2x => v2ab, k3x => v3b] TTL = Zb
>>>>>>>>>> - updates the set of updated keys like [x => (k1 -> ver1,
>> k2
>>>> ->
>>>>>>> ver2,
>>>>>>>> k3
>>>>>>>>>> -> ver3)] TTL = Zb
>>>>>>>>>> 
>>>>>>>>>> Service Committer (may be embedded into Orchestrator):
>>>>>>>>>> - takes all the updated keys and versions for `otx` = x
>>>>>>>>>>       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
>>>>>>>>>> - in a single transaction checks value versions for all
>> the
>>>> old
>>>>>>> values
>>>>>>>>>>       and replaces them with calculated new ones
>>>>>>>>>> - does cleanup of temporary keys and values
>>>>>>>>>> - in case of version mismatch or TX timeout just rollbacks
>>> and
>>>>>>> signals
>>>>>>>>>>        to Orchestrator to restart the job with new `otx`
>>>>>>>>>> 
>>>>>>>>>> PROFIT!!
>>>>>>>>>> 
>>>>>>>>>> This approach even allows you to run independent parts of
>> the
>>>>> graph
>>>>>>> in
>>>>>>>>>> parallel (with TX transfer you will always run only one at
>> a
>>>>> time).
>>>>>>>> Also
>>>>>>>>> it
>>>>>>>>>> does not require inventing any special fault tolerance
>>> technics
>>>>>>> because
>>>>>>>>>> Ignite caches are already fault tolerant and all the
>>>> intermediate
>>>>>>>> results
>>>>>>>>>> are virtually invisible and stored with TTL, thus in case
>> of
>>>> any
>>>>>>> crash
>>>>>>>>> you
>>>>>>>>>> will not have inconsistent state or garbage.
>>>>>>>>>> 
>>>>>>>>>> Sergi


Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Hi, Igniters! I've made an implementation of transactions with a non-single
coordinator: you can start a transaction in one thread and commit it in
another thread.
Take a look at it and share your thoughts.

https://github.com/voipp/ignite/pull/10/commits/3a3d90aa6ac84f125e4c3ce4ced4f269a695ef45
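
In short, the change lets a transaction be detached from the thread that
started it and re-attached in another thread. A minimal usage sketch (note:
tx.stop() and txStart(Transaction) are the methods proposed in this pull
request, not part of the released Ignite API; org.apache.ignite.* imports
are assumed):

    Ignite ignite = Ignition.ignite();
    IgniteCache<String, Integer> cache = ignite.getOrCreateCache("testCache");

    // Thread 1: start the transaction and detach it from this thread.
    Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
    cache.put("key1", 1);
    tx.stop();

    // Thread 2: re-bind the same transaction and finish it.
    new Thread(() -> {
        ignite.transactions().txStart(tx);
        cache.put("key2", 2);
        tx.commit();
    }).start();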

Fri, 17 Mar 2017 at 19:26, Sergi Vladykin <se...@gmail.com>:

> You know better, go ahead! :)
>
> Sergi
>
> 2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > we've discovered several problems regarding your "accumulation"
> > approach. These are:
> >
> >    1. performance issues when transferring data from the temporary cache
> >    to the permanent one. Keep in mind the large number of concurrent
> >    transactions in the Service committer.
> >    2. extreme memory load when keeping the temporary cache in memory.
> >    3. since the user is not acquainted with Ignite, working with the
> >    cache must be transparent for him. Keep this in mind. The user's node
> >    can evaluate logic with no transaction at all, so we should deal with
> >    both types of execution flow: transactional and non-transactional.
> >    Another problem is transaction id support on the user node. We would
> >    have to handle all these issues and many more.
> >    4. we cannot pessimistically lock an entity.
> >
> > As a result, we decided to move on with building a distributed
> > transaction. We have put aside your "accumulation" approach until we
> > figure out how to solve the difficulties above.
> >
> > Thu, 16 Mar 2017 at 16:56, Sergi Vladykin <se...@gmail.com>:
> >
> > > The problem "How to run millions of entities, and millions of
> operations
> > on
> > > a single Pentium3" is out of scope here. Do the math, plan capacity
> > > reasonably.
> > >
> > > Sergi
> > >
> > > 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > hmm, if we have millions of entities and millions of operations,
> > > > would not this approach lead to memory overflow and performance
> > > > degradation?
> > > >
> > > > Thu, 16 Mar 2017 at 15:42, Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > > >
> > > > > 1. Actually you have to check versions on all the values you have
> > read
> > > > > during the tx.
> > > > >
> > > > > For example if we have [k1 => v1, k2 => v2] and do:
> > > > >
> > > > > put(k1, get(k2) + 5)
> > > > >
> > > > > We have to remember the version for k2. This logic can be
> relatively
> > > > easily
> > > > > encapsulated in a framework atop of Ignite. You need to implement
> one
> > > to
> > > > > make all this stuff usable.
> > > > >
> > > > > 2. I suggest to avoid any locking here, because you easily will end
> > up
> > > > with
> > > > > deadlocks. If you do not have too frequent updates for your keys,
> > > > > optimistic approach will work just fine.
> > > > >
> > > > > Theoretically in the Committer Service you can start a thread for
> the
> > > > > lifetime of the whole distributed transaction, take a lock on the
> key
> > > > using
> > > > > IgniteCache.lock(K key) before executing any Services, wait for all
> > the
> > > > > services to complete, execute optimistic commit in the same thread
> > > while
> > > > > keeping this lock and then release it. Notice that all the Ignite
> > > > > transactions inside of all Services must be optimistic here to be
> > able
> > > to
> > > > > read this locked key.
> > > > >
> > > > > But again I do not recommend you using this approach until you
> have a
> > > > > reliable deadlock avoidance scheme.
> > > > >
> > > > > Sergi
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
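
To make point 1 concrete, a minimal sketch of such a framework on top of
Ignite (VersionedValue and TxContext are assumptions invented for this
example, not existing Ignite API; only the transaction and cache calls are
real, and java.util.* / org.apache.ignite.* imports are assumed):

    class VersionedValue implements java.io.Serializable {
        final Object val;
        final UUID ver = UUID.randomUUID(); // fresh version on every write

        VersionedValue(Object val) { this.val = val; }
    }

    class TxContext {
        final Map<String, UUID> readVers = new HashMap<>(); // key -> version we saw
        final Map<String, VersionedValue> writes = new HashMap<>();

        Object read(IgniteCache<String, VersionedValue> cache, String key) {
            VersionedValue v = cache.get(key);

            if (v != null)
                readVers.putIfAbsent(key, v.ver); // remember every value we read

            return v == null ? null : v.val;
        }

        void write(String key, Object val) {
            writes.put(key, new VersionedValue(val));
        }

        // Optimistic commit: fails if anything we read changed in the meantime.
        void commit(Ignite ignite, IgniteCache<String, VersionedValue> cache) {
            try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                for (Map.Entry<String, UUID> e : readVers.entrySet()) {
                    VersionedValue cur = cache.get(e.getKey());

                    if (cur == null || !cur.ver.equals(e.getValue()))
                        throw new IllegalStateException("Version mismatch: " + e.getKey());
                }

                writes.forEach(cache::put);
                tx.commit();
            }
        }
    }

With this, put(k1, get(k2) + 5) becomes a write through the context after a
tracked read of k2, and the version of k2 is re-checked at commit time.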
> > > > > 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > > >
> > > > > > Yeah, now I got it.
> > > > > > There are some doubts on this approach:
> > > > > > 1) During the optimistic commit phase, when you assure no one
> > > > > > altered the original values, you must check versions of other
> > > > > > dependent keys. How could we obtain those keys (in an automatic
> > > > > > manner, of course)?
> > > > > > 2) How could we lock a key before some Service A introduces
> > > > > > changes, so that no other service is allowed to change this
> > > > > > key-value (a sort of pessimistic blocking)?
> > > > > > Maybe you know some implementations of such an approach?
> > > > > >
> > > > > > Wed, 15 Mar 2017 at 17:54, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > >
> > > > > > >  Thank you very much for help.  I will answer later.
> > > > > > >
> > > > > > > Wed, 15 Mar 2017 at 17:39, Sergi Vladykin <
> > > > sergi.vladykin@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > None of the services updates a key in place; they only generate
> > > > > > > new keys augmented by otx and store the updated value in the
> > > > > > > same cache, plus remember the keys and versions participating in
> > > > > > > the transaction in some separate atomic cache.
> > > > > > >
> > > > > > > Follow this sequence of changes applied to cache contents by
> each
> > > > > > Service:
> > > > > > >
> > > > > > > Initial cache contents:
> > > > > > >             [k1 => v1]
> > > > > > >             [k2 => v2]
> > > > > > >             [k3 => v3]
> > > > > > >
> > > > > > > Cache contents after Service A:
> > > > > > >             [k1 => v1]
> > > > > > >             [k2 => v2]
> > > > > > >             [k3 => v3]
> > > > > > >             [k1x => v1a]
> > > > > > >             [k2x => v2a]
> > > > > > >
> > > > > > >          + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > atomic
> > > > > cache
> > > > > > >
> > > > > > > Cache contents after Service B:
> > > > > > >             [k1 => v1]
> > > > > > >             [k2 => v2]
> > > > > > >             [k3 => v3]
> > > > > > >             [k1x => v1a]
> > > > > > >             [k2x => v2ab]
> > > > > > >             [k3x => v3b]
> > > > > > >
> > > > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > separate
> > > > > > > atomic cache
> > > > > > >
> > > > > > > Finally the Committer Service takes this map of updated keys
> > > > > > > and their versions from some separate atomic cache, starts an
> > > > > > > Ignite transaction and replaces all the values for the k* keys
> > > > > > > with the values taken from the k*x keys. The successful result
> > > > > > > must be the following:
> > > > > > >
> > > > > > >             [k1 => v1a]
> > > > > > >             [k2 => v2ab]
> > > > > > >             [k3 => v3b]
> > > > > > >             [k1x => v1a]
> > > > > > >             [k2x => v2ab]
> > > > > > >             [k3x => v3b]
> > > > > > >
> > > > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > separate
> > > > > > > atomic cache
> > > > > > >
> > > > > > > But the Committer Service also has to check that no one updated
> > > > > > > the original values before us, because otherwise we cannot give
> > > > > > > any serializability guarantee for these distributed
> > > > > > > transactions. Here we may need to check not only the versions of
> > > > > > > the updated keys, but also the versions of any other keys the
> > > > > > > end result depends on.
> > > > > > >
> > > > > > > After that Committer Service has to do a cleanup (may be
> outside
> > of
> > > > the
> > > > > > > committing tx) to come to the following final state:
> > > > > > >
> > > > > > >             [k1 => v1a]
> > > > > > >             [k2 => v2ab]
> > > > > > >             [k3 => v3b]
> > > > > > >
> > > > > > > Makes sense?
> > > > > > >
> > > > > > > Sergi
> > > > > > >
> > > > > > >
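
A sketch of the Committer Service's commit step for the walk-through above
(the Val class with a ver field, the readVers map and the "x"-suffixed
temporary keys mirror the example and are assumptions, not a ready-made API):

    void commitOtx(Ignite ignite, IgniteCache<String, Val> cache,
        Map<String, UUID> readVers /* e.g. k1 -> ver1, k2 -> ver2, k3 -> ver3 */) {
        try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
            for (Map.Entry<String, UUID> e : readVers.entrySet()) {
                Val orig = cache.get(e.getKey());

                // Somebody committed on top of the values we started from: abort.
                if (orig == null || !orig.ver.equals(e.getValue()))
                    throw new IllegalStateException("Concurrent update: " + e.getKey());

                cache.put(e.getKey(), cache.get(e.getKey() + "x")); // k1 <- k1x
            }

            tx.commit();
        }

        // Cleanup may run outside the committing tx: if we crash right here,
        // the temporary k*x entries simply expire by their TTL.
        for (String k : readVers.keySet())
            cache.remove(k + "x");
    }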
> > > > > > > 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > >    - What do you mean by saying "*in a single transaction
> > > > > > > >    checks value versions for all the old values and replaces
> > > > > > > >    them with calculated new ones*"? Every time you change a
> > > > > > > >    value (in some service), you store it to *some special
> > > > > > > >    atomic cache*, so when all services have ceased working,
> > > > > > > >    the Service committer gets the values with the latest
> > > > > > > >    versions.
> > > > > > > >    - After "*does cleanup of temporary keys and values*" the
> > > > > > > >    Service committer persists them into the permanent store,
> > > > > > > >    doesn't it?
> > > > > > > >    - I can't grasp your thought; you say "*in case of version
> > > > > > > >    mismatch or TX timeout just rollbacks*". But what versions
> > > > > > > >    would it match?
> > > > > > > >
> > > > > > > >
> > > > > > > > Wed, 15 Mar 2017 at 15:34, Sergi Vladykin <
> > > > > sergi.vladykin@gmail.com
> > > > > > >:
> > > > > > > >
> > > > > > > > > Ok, here is what you actually need to implement at the
> > > > application
> > > > > > > level.
> > > > > > > > >
> > > > > > > > > Let's say we have to call 2 services in the following order:
> > > > > > > > >  - Service A: wants to update keys [k1 => v1,   k2 => v2]
> to
> > > > [k1
> > > > > =>
> > > > > > > > v1a,
> > > > > > > > >   k2 => v2a]
> > > > > > > > >  - Service B: wants to update keys [k2 => v2a, k3 => v3]
> to
> > > [k2
> > > > > =>
> > > > > > > > v2ab,
> > > > > > > > > k3 => v3b]
> > > > > > > > >
> > > > > > > > > The change
> > > > > > > > >     from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > > > > > > > >     to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > > > > > must happen in a single transaction.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Optimistic protocol to solve this:
> > > > > > > > >
> > > > > > > > > Each cache key must have a field `otx`, which is a unique
> > > > > > orchestrator
> > > > > > > TX
> > > > > > > > > identifier - it must be a parameter passed to all the
> > services.
> > > > If
> > > > > > > `otx`
> > > > > > > > is
> > > > > > > > > set to some value it means that it is an intermediate key
> and
> > > is
> > > > > > > visible
> > > > > > > > > only inside of some transaction, for the finalized key
> `otx`
> > > must
> > > > > be
> > > > > > > > null -
> > > > > > > > > it means the key is committed and visible for everyone.
> > > > > > > > >
> > > > > > > > > Each cache value must have a field `ver` which is a version
> > of
> > > > that
> > > > > > > > value.
> > > > > > > > >
> > > > > > > > > For both fields (`otx` and `ver`) the safest way is to use
> > > UUID.
> > > > > > > > >
> > > > > > > > > Workflow is the following:
> > > > > > > > >
> > > > > > > > > Orchestrator starts the distributed transaction with `otx`
> =
> > x
> > > > and
> > > > > > > passes
> > > > > > > > > this parameter to all the services.
> > > > > > > > >
> > > > > > > > > Service A:
> > > > > > > > >  - does some computations
> > > > > > > > >  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > > > > > > > >       where
> > > > > > > > >           Za - left time from max Orchestrator TX duration
> > > after
> > > > > > > Service
> > > > > > > > A
> > > > > > > > > end
> > > > > > > > >           k1x, k2x - new temporary keys with field `otx` =
> x
> > > > > > > > >           v2a has updated version `ver`
> > > > > > > > >  - returns a set of updated keys and all the old versions
> to
> > > the
> > > > > > > > > orchestrator
> > > > > > > > >        or just stores it in some special atomic cache like
> > > > > > > > >        [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > > > > >
> > > > > > > > > Service B:
> > > > > > > > >  - retrieves the updated value k2x => v2a because it knows
> > > `otx`
> > > > =
> > > > > x
> > > > > > > > >  - does computations
> > > > > > > > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > > > > >  - updates the set of updated keys like [x => (k1 -> ver1,
> k2
> > > ->
> > > > > > ver2,
> > > > > > > k3
> > > > > > > > > -> ver3)] TTL = Zb
> > > > > > > > >
> > > > > > > > > Service Committer (may be embedded into Orchestrator):
> > > > > > > > >  - takes all the updated keys and versions for `otx` = x
> > > > > > > > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > > > > >  - in a single transaction checks value versions for all
> the
> > > old
> > > > > > values
> > > > > > > > >        and replaces them with calculated new ones
> > > > > > > > >  - does cleanup of temporary keys and values
> > > > > > > > >  - in case of version mismatch or TX timeout just rollbacks
> > and
> > > > > > signals
> > > > > > > > >         to Orchestrator to restart the job with new `otx`
> > > > > > > > >
> > > > > > > > > PROFIT!!
> > > > > > > > >
> > > > > > > > > This approach even allows you to run independent parts of
> the
> > > > graph
> > > > > > in
> > > > > > > > > parallel (with TX transfer you will always run only one at
> a
> > > > time).
> > > > > > > Also
> > > > > > > > it
> > > > > > > > > does not require inventing any special fault tolerance
> > technics
> > > > > > because
> > > > > > > > > Ignite caches are already fault tolerant and all the
> > > intermediate
> > > > > > > results
> > > > > > > > > are virtually invisible and stored with TTL, thus in case
> of
> > > any
> > > > > > crash
> > > > > > > > you
> > > > > > > > > will not have inconsistent state or garbage.
> > > > > > > > >
> > > > > > > > > Sergi
> > > > > > > > >
> > > > > > > > >
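
For illustration, a sketch of the data model and a service-side write from
this protocol (OtxKey, OtxValue and the deadline bookkeeping are assumptions
made for the example; withExpiryPolicy, CreatedExpiryPolicy and Duration are
the standard Ignite/JCache expiry API):

    import java.util.UUID;
    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;

    class OtxKey implements java.io.Serializable {
        final String key; // logical key, e.g. "k1"
        final UUID otx;   // set while the entry is intermediate, null when final

        OtxKey(String key, UUID otx) { this.key = key; this.otx = otx; }
        // equals()/hashCode() over both fields omitted for brevity.
    }

    class OtxValue implements java.io.Serializable {
        final Object val;
        final UUID ver; // compared by the committer at the end

        OtxValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
    }

    // Service A stores an intermediate result that lives only for the time
    // left in the orchestrator TX (Za), so a crash leaves no garbage.
    // IgniteCache<OtxKey, OtxValue> cache, UUID otx, Object v1a and
    // long otxDeadline are assumed to be in scope:
    long leftMs = otxDeadline - System.currentTimeMillis();

    cache.withExpiryPolicy(new CreatedExpiryPolicy(
            new Duration(TimeUnit.MILLISECONDS, leftMs)))
        .put(new OtxKey("k1", otx), new OtxValue(v1a, UUID.randomUUID()));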
> > > > > > > > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Okay, we are open to proposals on the business task. I
> > > > > > > > > > mean, we can make use of some other thing, not a
> > > > > > > > > > distributed transaction. Not a transaction yet.
> > > > > > > > > >
> > > > > > > > > > > Wed, 15 Mar 2017 at 11:24, Vladimir Ozerov <
> > > > > vozerov@gridgain.com
> > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > IMO the use case makes sense. However, as Sergi already
> > > > > > mentioned,
> > > > > > > > the
> > > > > > > > > > > problem is far more complex, than simply passing TX
> state
> > > > over
> > > > > a
> > > > > > > > wire.
> > > > > > > > > > Most
> > > > > > > > > > > probably a kind of coordinator will be required still
> to
> > > > manage
> > > > > > all
> > > > > > > > > kinds
> > > > > > > > > > > of failures. This task should be started with clean
> > design
> > > > > > proposal
> > > > > > > > > > > explaining how we handle all these concurrent events.
> And
> > > > only
> > > > > > > then,
> > > > > > > > > when
> > > > > > > > > > > we understand all implications, we should move to
> > > development
> > > > > > > stage.
> > > > > > > > > > >
> > > > > > > > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Right
> > > > > > > > > > > >
> > > > > > > > > > > > Wed, 15 Mar 2017 at 10:35, Sergi Vladykin <
> > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Good! Basically your orchestrator just takes some
> > > > > predefined
> > > > > > > > graph
> > > > > > > > > of
> > > > > > > > > > > > > distributed services to be invoked, calls them by
> > some
> > > > kind
> > > > > > of
> > > > > > > > RPC
> > > > > > > > > > and
> > > > > > > > > > > > > passes the needed parameters between them, right?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > The orchestrator is a custom thing. It is
> > > > > > > > > > > > > > responsible for managing business scenario flows.
> > > > > > > > > > > > > > Many nodes are involved in scenarios; they
> > > > > > > > > > > > > > exchange data and follow one another. If you are
> > > > > > > > > > > > > > acquainted with the BPMN framework, the
> > > > > > > > > > > > > > orchestrator is like a BPMN engine.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Tue, 14 Mar 2017, 18:56 Sergi Vladykin <
> > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > What is Orchestrator for you? Is it a thing
> from
> > > > > > Microsoft
> > > > > > > or
> > > > > > > > > > your
> > > > > > > > > > > > > custom
> > > > > > > > > > > > > > > in-house software?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > >:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Fine. Let's say we've got multiple servers
> > > > > > > > > > > > > > > > which fulfill custom logic. These servers
> > > > > > > > > > > > > > > > compose an oriented graph (a BPMN process)
> > > > > > > > > > > > > > > > which is controlled by the Orchestrator.
> > > > > > > > > > > > > > > > For instance, *server1* creates *variable A*
> > > > > > > > > > > > > > > > with value 1, persists it to the IGNITE cache,
> > > > > > > > > > > > > > > > then creates *variable B* and sends it to
> > > > > > > > > > > > > > > > *server2*. The latter receives *variable B*,
> > > > > > > > > > > > > > > > does some logic with it and stores the result
> > > > > > > > > > > > > > > > to IGNITE.
> > > > > > > > > > > > > > > > All the work made by both servers must be
> > > > > > > > > > > > > > > > fulfilled in *one* transaction, because we
> > > > > > > > > > > > > > > > need all the information done, or nothing
> > > > > > > > > > > > > > > > (rolled back). The scenario is managed by the
> > > > > > > > > > > > > > > > orchestrator.
> > > > > > > > > > > > > > > >
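
As pure illustration of that requirement (server1, server2, VariableA/B and
the single shared transaction are hypothetical application-level pieces;
today a txStart on one node does not cover cache updates performed by other
nodes, which is exactly the gap discussed in this thread):

    Transaction tx = ignite.transactions().txStart(); // one logical unit of work

    VariableA a = server1.createVariableA(1); // server1 persists A to Ignite
    VariableB b = server1.createVariableB(a); // ...and hands B over to server2
    server2.process(b);                       // server2 stores its result to Ignite

    tx.commit(); // everything above must be committed or rolled back together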
> > > > > > > > > > > > > > > > Tue, 14 Mar 2017 at 17:31, Sergi Vladykin <
> > > > > > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Ok, it is not a business case, it is your
> > wrong
> > > > > > > solution
> > > > > > > > > for
> > > > > > > > > > > it.
> > > > > > > > > > > > > > > > > Lets try again, what is the business case?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY
> KUZNETSOV
> > <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > The case is the following: one starts a
> > > > > > > > > > > > > > > > > > transaction on one node and commits this
> > > > > > > > > > > > > > > > > > transaction on another JVM node (or rolls
> > > > > > > > > > > > > > > > > > it back remotely).
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Tue, 14 Mar 2017 at 16:30, Sergi
> > Vladykin <
> > > > > > > > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Because even if you make it work for
> some
> > > > > > > simplistic
> > > > > > > > > > > > scenario,
> > > > > > > > > > > > > > get
> > > > > > > > > > > > > > > > > ready
> > > > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > write many fault tolerance tests and
> make
> > > > sure
> > > > > > that
> > > > > > > > you
> > > > > > > > > > TXs
> > > > > > > > > > > > > work
> > > > > > > > > > > > > > > > > > gracefully
> > > > > > > > > > > > > > > > > > > in all modes in case of crashes. Also
> > make
> > > > sure
> > > > > > > that
> > > > > > > > we
> > > > > > > > > > do
> > > > > > > > > > > > not
> > > > > > > > > > > > > > have
> > > > > > > > > > > > > > > > any
> > > > > > > > > > > > > > > > > > > performance drops after all your
> changes
> > in
> > > > > > > existing
> > > > > > > > > > > > > benchmarks.
> > > > > > > > > > > > > > > All
> > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > all
> > > > > > > > > > > > > > > > > > > I don't believe these conditions will
> be
> > > met
> > > > > and
> > > > > > > your
> > > > > > > > > > > > > > contribution
> > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > accepted.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Better solution to what problem?
> Sending
> > TX
> > > > to
> > > > > > > > another
> > > > > > > > > > > node?
> > > > > > > > > > > > > The
> > > > > > > > > > > > > > > > > problem
> > > > > > > > > > > > > > > > > > > statement itself is already wrong. What
> > > > > business
> > > > > > > case
> > > > > > > > > you
> > > > > > > > > > > are
> > > > > > > > > > > > > > > trying
> > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > solve? I'm sure everything you need can
> > be
> > > > done
> > > > > > in
> > > > > > > a
> > > > > > > > > much
> > > > > > > > > > > > more
> > > > > > > > > > > > > > > simple
> > > > > > > > > > > > > > > > > and
> > > > > > > > > > > > > > > > > > > efficient way at the application level.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY
> > > KUZNETSOV
> > > > <
> > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Why wrong? Do you know a better
> > > > > > > > > > > > > > > > > > > > solution?
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Tue, 14 Mar 2017 at 15:46, Sergi
> > > > Vladykin <
> > > > > > > > > > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Just serializing TX object and
> > > > > deserializing
> > > > > > it
> > > > > > > > on
> > > > > > > > > > > > another
> > > > > > > > > > > > > > node
> > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > > > meaningless, because other nodes
> > > > > > participating
> > > > > > > in
> > > > > > > > > the
> > > > > > > > > > > TX
> > > > > > > > > > > > > have
> > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > know
> > > > > > > > > > > > > > > > > > > > about
> > > > > > > > > > > > > > > > > > > > > the new coordinator. This will
> > require
> > > > > > protocol
> > > > > > > > > > > changes,
> > > > > > > > > > > > we
> > > > > > > > > > > > > > > > > > definitely
> > > > > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > > > > have fault tolerance and
> performance
> > > > > issues.
> > > > > > > IMO
> > > > > > > > > the
> > > > > > > > > > > > whole
> > > > > > > > > > > > > > idea
> > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > wrong
> > > > > > > > > > > > > > > > > > > > > and it makes no sense to waste time
> > on
> > > > it.
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > > > > KUZNETSOV
> > > > > > <
> > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > The IgniteTransactionState
> > > > > > > > > > > > > > > > > > > > > > implementation contains
> > > > > > > > > > > > > > > > > > > > > > IgniteTxEntry's, which are
> > > > > > > > > > > > > > > > > > > > > > supposed to be transferable.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Mon, 13 Mar 2017 at 19:32,
> > Dmitriy
> > > > > > > Setrakyan
> > > > > > > > <
> > > > > > > > > > > > > > > > > > > dsetrakyan@apache.org
> > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > It sounds a little scary to me
> > that
> > > > we
> > > > > > are
> > > > > > > > > > passing
> > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > objects
> > > > > > > > > > > > > > > > > > > > > > > around. Such object may contain
> > all
> > > > > sorts
> > > > > > > of
> > > > > > > > > > Ignite
> > > > > > > > > > > > > > > context.
> > > > > > > > > > > > > > > > If
> > > > > > > > > > > > > > > > > > > some
> > > > > > > > > > > > > > > > > > > > > data
> > > > > > > > > > > > > > > > > > > > > > > needs to be passed across, we
> > > should
> > > > > > > create a
> > > > > > > > > > > special
> > > > > > > > > > > > > > > > transfer
> > > > > > > > > > > > > > > > > > > object
> > > > > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > > > this case.
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > >
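
A sketch of what such a transfer object might carry (the DTO itself is
hypothetical; GridCacheVersion, IgniteTxKey and IgniteTxEntry are the
internal Ignite types already mentioned in this thread):

    class TxTransferData implements java.io.Serializable {
        GridCacheVersion xidVer;                      // transaction id
        TransactionConcurrency concurrency;
        TransactionIsolation isolation;
        long timeout;
        Map<IgniteTxKey, IgniteTxEntry> writeEntries; // pending writes (txState)
        // Deliberately no cache context, futures or other node-local state:
        // the receiving node re-injects those from its own context.
    }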
> > > > > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10
> AM,
> > > > > ALEKSEY
> > > > > > > > > > KUZNETSOV
> > > > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > well, there are a couple of
> > > > > > > > > > > > > > > > > > > > > > > > issues preventing the
> > > > > > > > > > > > > > > > > > > > > > > > transaction from proceeding.
> > > > > > > > > > > > > > > > > > > > > > > > First, after transaction
> > > > > > > > > > > > > > > > > > > > > > > > serialization and
> > > > > > > > > > > > > > > > > > > > > > > > deserialization on the remote
> > > > > > > > > > > > > > > > > > > > > > > > server there is no txState, so
> > > > > > > > > > > > > > > > > > > > > > > > I'm going to put it in
> > > > > > > > > > > > > > > > > > > > > > > > writeExternal()\readExternal().
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > The other one is that the
> > > > > > > > > > > > > > > > > > > > > > > > deserialized transaction lacks
> > > > > > > > > > > > > > > > > > > > > > > > the shared cache context field
> > > > > > > > > > > > > > > > > > > > > > > > in TransactionProxyImpl.
> > > > > > > > > > > > > > > > > > > > > > > > Perhaps it must be injected by
> > > > > > > > > > > > > > > > > > > > > > > > GridResourceProcessor?
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Mon, 13 Mar 2017 at 17:27,
> > > > ALEKSEY
> > > > > > > > > KUZNETSOV
> > > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > while starting and
> > > > > > > > > > > > > > > > > > > > > > > > > continuing a transaction in
> > > > > > > > > > > > > > > > > > > > > > > > > different JVMs I run into a
> > > > > > > > > > > > > > > > > > > > > > > > > serialization exception in
> > > > > > > > > > > > > > > > > > > > > > > > > writeExternalMeta:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Some meta cannot be serialized.
> > > > > > > > > > > > > > > > > > > > > > > > > Fri, 10 Mar 2017 at
> 17:25,
> > > > Alexey
> > > > > > > > > > Goncharuk <
> > > > > > > > > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > I think I am starting to
> get
> > > what
> > > > > you
> > > > > > > > want,
> > > > > > > > > > > but I
> > > > > > > > > > > > > > have
> > > > > > > > > > > > > > > a
> > > > > > > > > > > > > > > > > few
> > > > > > > > > > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > > > > > > > > > >  - What is the API for the
> > > > proposed
> > > > > > > > change?
> > > > > > > > > > In
> > > > > > > > > > > > your
> > > > > > > > > > > > > > > test,
> > > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > > > pass
> > > > > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > > > > > instance of transaction
> > created
> > > > on
> > > > > > > > > ignite(0)
> > > > > > > > > > to
> > > > > > > > > > > > the
> > > > > > > > > > > > > > > > ignite
> > > > > > > > > > > > > > > > > > > > instance
> > > > > > > > > > > > > > > > > > > > > > > > > ignite(1). This is
> obviously
> > > not
> > > > > > > possible
> > > > > > > > > in
> > > > > > > > > > a
> > > > > > > > > > > > > truly
> > > > > > > > > > > > > > > > > > > distributed
> > > > > > > > > > > > > > > > > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > > > > > > > > > > > > > > > > - How will you synchronize
> > > cache
> > > > > > update
> > > > > > > > > > actions
> > > > > > > > > > > > and
> > > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > > > commit?
> > > > > > > > > > > > > > > > > > > > > > > > > Say, you have one node that
> > > > decided
> > > > > > to
> > > > > > > > > > commit,
> > > > > > > > > > > > but
> > > > > > > > > > > > > > > > another
> > > > > > > > > > > > > > > > > > node
> > > > > > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > > > > > still
> > > > > > > > > > > > > > > > > > > > > > > > > writing within this
> > > transaction.
> > > > > How
> > > > > > do
> > > > > > > > you
> > > > > > > > > > > make
> > > > > > > > > > > > > sure
> > > > > > > > > > > > > > > > that
> > > > > > > > > > > > > > > > > > two
> > > > > > > > > > > > > > > > > > > > > nodes
> > > > > > > > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > > > > > > > > not call commit() and
> > > rollback()
> > > > > > > > > > > simultaneously?
> > > > > > > > > > > > > > > > > > > > > > > > >  - How do you make sure
> that
> > > > either
> > > > > > > > > commit()
> > > > > > > > > > or
> > > > > > > > > > > > > > > > rollback()
> > > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > > > called
> > > > > > > > > > > > > > > > > > > > > > if
> > > > > > > > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > > > > > originator failed?
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00
> > > > Дмитрий
> > > > > > > Рябов
> > > > > > > > <
> > > > > > > > > > > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my
> > > > > > > > > > > > > > > > > > > > > > > > > > initial understanding was
> > > > > > > > > > > > > > > > > > > > > > > > > > that transferring of tx
> > > > > > > > > > > > > > > > > > > > > > > > > > ownership from one node to
> > > > > > > > > > > > > > > > > > > > > > > > > > another would happen
> > > > > > > > > > > > > > > > > > > > > > > > > > automatically when the
> > > > > > > > > > > > > > > > > > > > > > > > > > originating node goes down.
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36
> GMT+03:00
> > > > > ALEKSEY
> > > > > > > > > > KUZNETSOV
> > > > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > I'm aiming to span a
> > > > > > > > > > > > > > > > > > > > > > > > > > > transaction over multiple
> > > > > > > > > > > > > > > > > > > > > > > > > > > threads, nodes, JVMs
> > > > > > > > > > > > > > > > > > > > > > > > > > > (soon), so every node is
> > > > > > > > > > > > > > > > > > > > > > > > > > > able to roll back or
> > > > > > > > > > > > > > > > > > > > > > > > > > > commit the common
> > > > > > > > > > > > > > > > > > > > > > > > > > > transaction. It turned
> > > > > > > > > > > > > > > > > > > > > > > > > > > out I need to transfer
> > > > > > > > > > > > > > > > > > > > > > > > > > > the tx between nodes in
> > > > > > > > > > > > > > > > > > > > > > > > > > > order to commit the
> > > > > > > > > > > > > > > > > > > > > > > > > > > transaction on a
> > > > > > > > > > > > > > > > > > > > > > > > > > > different node (in the
> > > > > > > > > > > > > > > > > > > > > > > > > > > same JVM).
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в
> > > 15:20,
> > > > > > Alexey
> > > > > > > > > > > > Goncharuk <
> > > > > > > > > > > > > > > > > > > > > > > > > >
> alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Do you mean that you
> > > want a
> > > > > > > concept
> > > > > > > > > of
> > > > > > > > > > > > > > > transferring
> > > > > > > > > > > > > > > > > of
> > > > > > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > > > > > > ownership
> > > > > > > > > > > > > > > > > > > > > > > > > > from
> > > > > > > > > > > > > > > > > > > > > > > > > > > > one node to another?
> My
> > > > > initial
> > > > > > > > > > > > understanding
> > > > > > > > > > > > > > was
> > > > > > > > > > > > > > > > > that
> > > > > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > > > > want
> > > > > > > > > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > > > > > > > able
> > > > > > > > > > > > > > > > > > > > > > > > > > > > to update keys in a
> > > > > transaction
> > > > > > > > from
> > > > > > > > > > > > multiple
> > > > > > > > > > > > > > > > threads
> > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > > > parallel.
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01
> > > GMT+03:00
> > > > > > > ALEKSEY
> > > > > > > > > > > > KUZNETSOV
> > > > > > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Well. Consider
> > > > transaction
> > > > > > > > started
> > > > > > > > > in
> > > > > > > > > > > one
> > > > > > > > > > > > > > node,
> > > > > > > > > > > > > > > > and
> > > > > > > > > > > > > > > > > > > > > continued
> > > > > > > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > > > > > > > another
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > one.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > The following test
> > > > > describes
> > > > > > my
> > > > > > > > > idea:
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 =
> > > > ignite(0);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions
> > > > > > > transactions =
> > > > > > > > > > > > > > > > > > > ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String,
> > > > > Integer>
> > > > > > > > cache
> > > > > > > > > =
> > > > > > > > > > > > > > > > > > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > testCache");
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Transaction tx =
> > > > > > > > > > transactions.txStart(
> > > > > > > > > > > > > > > > concurrency,
> > > > > > > > > > > > > > > > > > > > > > isolation);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key1",
> 1);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key2",
> 2);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > IgniteInternalFuture<Boolean>
> > > > > > > > fut =
> > > > > > > > > > > > > > > > > > > > > GridTestUtils.runAsync(()
> > > > > > > > > > > > > > > > > > > > > > > ->
> > > > > > > > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> >  IgniteTransactions
> > > > ts =
> > > > > > > > > > > > > > > > > ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > >  Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > >  Assert.assertEquals(
> > > > > > > > > > > > > > > > TransactionState.STOPPED,
> > > > > > > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >  Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
>  cache.put("key3",
> > > 3);
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > >  Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> Assert.assertEquals(
> > > > > > > > > > > > > > TransactionState.COMMITTED,
> > > > > > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > Assert.assertEquals((long)1,
> > > > > > > > > > > > > > > > > > (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > Assert.assertEquals((long)3,
> > > > > > > > > > > > > > > > > > (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > Assert.assertFalse(cache.
> > > > > > > > > > > > > > containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > In method
> > > > *ts.txStart(...)*
> > > > > > we
> > > > > > > > just
> > > > > > > > > > > > rebind
> > > > > > > > > > > > > > *tx*
> > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > > current
> > > > > > > > > > > > > > > > > > > > > > > > thread:
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > public void
> > > > > > txStart(Transaction
> > > > > > > > > tx) {
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > >  TransactionProxyImpl
> > > > > > > > > > > > transactionProxy =
> > > > > > > > > > > > > > > > > > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >     cctx.tm
> > > ().reopenTx(
> > > > > > > > > > > > > > transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
>  transactionProxy.
> > > > > > > > > > > > bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > In method
> *reopenTx*
> > we
> > > > > alter
> > > > > > > > > > > *threadMap*
> > > > > > > > > > > > > so
> > > > > > > > > > > > > > > that
> > > > > > > > > > > > > > > > > it
> > > > > > > > > > > > > > > > > > > > binds
> > > > > > > > > > > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > How do u think
> about
> > > it ?
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > вт, 7 мар. 2017 г.
> в
> > > > 22:38,
> > > > > > > Denis
> > > > > > > > > > > Magda <
> > > > > > > > > > > > > > > > > > > > dmagda@apache.org
> > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please share the
> > > > rational
> > > > > > > > behind
> > > > > > > > > > this
> > > > > > > > > > > > and
> > > > > > > > > > > > > > the
> > > > > > > > > > > > > > > > > > > thoughts,
> > > > > > > > > > > > > > > > > > > > > > > design
> > > > > > > > > > > > > > > > > > > > > > > > > > ideas
> > > > > > > > > > > > > > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Denis
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
You know better, go ahead! :)

Sergi

2017-03-17 16:16 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> we've discovered several problems regarding your "accumulation"
> approach. These are:
>
>    1. performance issues when transferring data from the temporary cache
>    to the permanent one. Keep in mind the large number of concurrent
>    transactions in the Service committer.
>    2. extreme memory load when keeping the temporary cache in memory.
>    3. as long as the user is not acquainted with Ignite, working with the
>    cache must be transparent for him. A user's node can evaluate logic
>    with no transaction at all, so we should handle both types of
>    execution flow: transactional and non-transactional. Another problem
>    is transaction id support at the user node. We would have to handle
>    all these issues and many more.
>    4. we cannot pessimistically lock an entity.
>
> As a result, we decided to move on with building a distributed
> transaction. We put aside your "accumulation" approach until we figure
> out how to solve the difficulties above.
>
> Thu, Mar 16, 2017 at 16:56, Sergi Vladykin <se...@gmail.com>:
>
> > The problem "How to run millions of entities, and millions of operations
> on
> > a single Pentium3" is out of scope here. Do the math, plan capacity
> > reasonably.
> >
> > Sergi
> >
> > 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > hmm, if we have millions of entities and millions of operations,
> > > wouldn't this approach lead to memory overflow and performance
> > > degradation?
> > >
> > > Thu, Mar 16, 2017 at 15:42, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > >
> > > > 1. Actually you have to check versions on all the values you have
> > > > read during the tx.
> > > >
> > > > For example if we have [k1 => v1, k2 => v2] and do:
> > > >
> > > > put(k1, get(k2) + 5)
> > > >
> > > > We have to remember the version for k2. This logic can be relatively
> > > > easily encapsulated in a framework atop of Ignite. You need to
> > > > implement one to make all this stuff usable.
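> > > >
> > > > As a rough illustration of what such a framework could track on the
> > > > read side (VersionedValue and ReadTracker are invented names for
> > > > this sketch, not an Ignite API):
> > > >
> > > > import java.util.HashMap;
> > > > import java.util.Map;
> > > > import java.util.UUID;
> > > > import org.apache.ignite.IgniteCache;
> > > >
> > > > /** Value plus its version (the `ver` discussed here). */
> > > > class VersionedValue {
> > > >     final Object val;
> > > >     final UUID ver;
> > > >     VersionedValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
> > > > }
> > > >
> > > > /** Remembers the version of every value read during the logical tx. */
> > > > class ReadTracker {
> > > >     final Map<String, UUID> readVers = new HashMap<>();
> > > >
> > > >     Object get(IgniteCache<String, VersionedValue> cache, String key) {
> > > >         VersionedValue vv = cache.get(key);
> > > >         if (vv != null)
> > > >             readVers.put(key, vv.ver); // e.g. remembers k2's version
> > > >         return vv == null ? null : vv.val;
> > > >     }
> > > > }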
> > > >
> > > > 2. I suggest avoiding any locking here, because you will easily end
> > > > up with deadlocks. If you do not have too frequent updates for your
> > > > keys, the optimistic approach will work just fine.
> > > >
> > > > Theoretically in the Committer Service you can start a thread for
> > > > the lifetime of the whole distributed transaction, take a lock on
> > > > the key using IgniteCache.lock(K key) before executing any Services,
> > > > wait for all the services to complete, execute the optimistic commit
> > > > in the same thread while keeping this lock, and then release it.
> > > > Notice that all the Ignite transactions inside of all Services must
> > > > be optimistic here to be able to read this locked key.
> > > >
> > > > But again, I do not recommend using this approach until you have a
> > > > reliable deadlock avoidance scheme.
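> > > >
> > > > For illustration only, that lock-holding flow could be sketched like
> > > > this (IgniteCache.lock() returning a java.util.concurrent.locks.Lock
> > > > is the real API; the service invocations are placeholders):
> > > >
> > > > import java.util.concurrent.locks.Lock;
> > > > import org.apache.ignite.Ignite;
> > > > import org.apache.ignite.IgniteCache;
> > > > import org.apache.ignite.transactions.Transaction;
> > > > import org.apache.ignite.transactions.TransactionConcurrency;
> > > > import org.apache.ignite.transactions.TransactionIsolation;
> > > >
> > > > void commitUnderLock(Ignite ignite, IgniteCache<String, Object> cache) {
> > > >     Lock lock = cache.lock("k2"); // lock the contended key up front
> > > >     lock.lock();
> > > >     try {
> > > >         // ... invoke all the Services here; their Ignite txs must be
> > > >         // OPTIMISTIC so they can still read the locked key ...
> > > >
> > > >         try (Transaction tx = ignite.transactions().txStart(
> > > >             TransactionConcurrency.OPTIMISTIC,
> > > >             TransactionIsolation.SERIALIZABLE)) {
> > > >             // apply the accumulated updates here, then:
> > > >             tx.commit();
> > > >         }
> > > >     }
> > > >     finally {
> > > >         lock.unlock(); // held for the whole distributed tx
> > > >     }
> > > > }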
> > > >
> > > > Sergi
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >
> > > > > Yeah, now i got it.
> > > > > There are some doubts on this approach:
> > > > > 1) During the optimistic commit phase, when you assure no one
> > > > > altered the original values, you must check versions of other
> > > > > dependent keys. How could we obtain those keys (in an automatic
> > > > > manner, of course)?
> > > > > 2) How could we lock a key before some Service A introduces
> > > > > changes, so that no other service is allowed to change this
> > > > > key-value (a sort of pessimistic locking)?
> > > > > Maybe you know some implementations of such an approach?
> > > > >
> > > > > Wed, Mar 15, 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > >
> > > > > >  Thank you very much for help.  I will answer later.
> > > > > >
> > > > > > Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > >
> > > > > > All the services do not update keys in place, but only generate
> > > > > > new keys augmented by otx and store the updated value in the
> > > > > > same cache + remember the keys and versions participating in the
> > > > > > transaction in some separate atomic cache.
> > > > > >
> > > > > > Follow this sequence of changes applied to cache contents by
> > > > > > each Service:
> > > > > >
> > > > > > Initial cache contents:
> > > > > >             [k1 => v1]
> > > > > >             [k2 => v2]
> > > > > >             [k3 => v3]
> > > > > >
> > > > > > Cache contents after Service A:
> > > > > >             [k1 => v1]
> > > > > >             [k2 => v2]
> > > > > >             [k3 => v3]
> > > > > >             [k1x => v1a]
> > > > > >             [k2x => v2a]
> > > > > >
> > > > > >          + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > > > >            atomic cache
> > > > > >
> > > > > > Cache contents after Service B:
> > > > > >             [k1 => v1]
> > > > > >             [k2 => v2]
> > > > > >             [k3 => v3]
> > > > > >             [k1x => v1a]
> > > > > >             [k2x => v2ab]
> > > > > >             [k3x => v3b]
> > > > > >
> > > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > > >           separate atomic cache
> > > > > >
> > > > > > Finally the Committer Service takes this map of updated keys and
> > > > > > their versions from some separate atomic cache, starts an Ignite
> > > > > > transaction and replaces all the values for k* keys with the
> > > > > > values taken from k*x keys. The successful result must be the
> > > > > > following:
> > > The
> > > > > > successful result must be the following:
> > > > > >
> > > > > >             [k1 => v1a]
> > > > > >             [k2 => v2ab]
> > > > > >             [k3 => v3b]
> > > > > >             [k1x => v1a]
> > > > > >             [k2x => v2ab]
> > > > > >             [k3x => v3b]
> > > > > >
> > > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > > >           separate atomic cache
> > > > > >
> > > > > > But the Committer Service also has to check that no one updated
> > > > > > the original values before us, because otherwise we can not give
> > > > > > any serializability guarantee for these distributed transactions.
> > > > > > Here we may need to check not only versions of the updated keys,
> > > > > > but also versions of any other keys the end result depends on.
> > > > > >
> > > > > > After that the Committer Service has to do a cleanup (may be
> > > > > > outside of the committing tx) to come to the following final
> > > > > > state:
> > > > > > committing tx) to come to the following final state:
> > > > > >
> > > > > >             [k1 => v1a]
> > > > > >             [k2 => v2ab]
> > > > > >             [k3 => v3b]
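> > > > > >
> > > > > > For illustration, this check-and-replace step could look roughly
> > > > > > like the following (reusing the VersionedValue shape sketched
> > > > > > earlier in this thread; the tx mode and names are assumptions,
> > > > > > not a prescribed API):
> > > > > >
> > > > > > import java.util.Map;
> > > > > > import java.util.UUID;
> > > > > > import org.apache.ignite.Ignite;
> > > > > > import org.apache.ignite.IgniteCache;
> > > > > > import org.apache.ignite.transactions.Transaction;
> > > > > > import org.apache.ignite.transactions.TransactionConcurrency;
> > > > > > import org.apache.ignite.transactions.TransactionIsolation;
> > > > > >
> > > > > > boolean commitOrRollback(Ignite ignite,
> > > > > >     IgniteCache<String, VersionedValue> cache,
> > > > > >     Map<String, UUID> savedVers /* [k -> ver] from the atomic cache */) {
> > > > > >     try (Transaction tx = ignite.transactions().txStart(
> > > > > >         TransactionConcurrency.PESSIMISTIC,
> > > > > >         TransactionIsolation.REPEATABLE_READ)) {
> > > > > >         for (Map.Entry<String, UUID> e : savedVers.entrySet()) {
> > > > > >             VersionedValue cur = cache.get(e.getKey());
> > > > > >             // An original value changed since it was read: abort.
> > > > > >             if (cur == null || !cur.ver.equals(e.getValue()))
> > > > > >                 return false; // closing without commit() rolls back
> > > > > >         }
> > > > > >         for (String k : savedVers.keySet())
> > > > > >             cache.put(k, cache.get(k + "x")); // promote temp k*x value
> > > > > >         tx.commit();
> > > > > >     }
> > > > > >     return true; // cleanup of the k*x keys can follow outside the tx
> > > > > > }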
> > > > > >
> > > > > > Makes sense?
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > >
> > > > > > 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > >
> > > > > > >    - what do u mean by saying "*in a single transaction checks
> > > > > > >    value versions for all the old values and replaces them with
> > > > > > >    calculated new ones*"? Every time you change a value (in
> > > > > > >    some service), you store it to *some special atomic cache*,
> > > > > > >    so when all the services have ceased working, the Service
> > > > > > >    committer has the values with the last versions.
> > > > > > >    - After "*does cleanup of temporary keys and values*" the
> > > > > > >    Service committer persists them into the permanent store,
> > > > > > >    doesn't it ?
> > > > > > >    - I cant grasp your thought: you say "*in case of version
> > > > > > >    mismatch or TX timeout just rollbacks*". But what versions
> > > > > > >    would it match?
> > > > > > TX
> > > > > > >    timeout just rollbacks*". But what versions would it match?
> > > > > > >
> > > > > > >
> > > > > > > Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > >
> > > > > > > > Ok, here is what you actually need to implement at the
> > > > > > > > application level.
> > > > > > > >
> > > > > > > > Lets say we have to call 2 services in the following order:
> > > > > > > >  - Service A: wants to update keys [k1 => v1, k2 => v2] to
> > > > > > > >    [k1 => v1a, k2 => v2a]
> > > > > > > >  - Service B: wants to update keys [k2 => v2a, k3 => v3] to
> > > > > > > >    [k2 => v2ab, k3 => v3b]
> > > > > > > >
> > > > > > > > The change
> > > > > > > >     from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > > > > > > >     to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > > > > must happen in a single transaction.
> > > > > > > >
> > > > > > > >
> > > > > > > > Optimistic protocol to solve this:
> > > > > > > >
> > > > > > > > Each cache key must have a field `otx`, which is a unique
> > > > > > > > orchestrator TX identifier - it must be a parameter passed to
> > > > > > > > all the services. If `otx` is set to some value, it means
> > > > > > > > that it is an intermediate key and is visible only inside of
> > > > > > > > some transaction; for the finalized key `otx` must be null -
> > > > > > > > it means the key is committed and visible for everyone.
> > > > > > > >
> > > > > > > > Each cache value must have a field `ver` which is a version
> > > > > > > > of that value.
> > > > > > > >
> > > > > > > > For both fields (`otx` and `ver`) the safest way is to use
> > > > > > > > UUID.
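> > > > > > > >
> > > > > > > > To make that concrete, the key and value shapes could look
> > > > > > > > like this (an illustrative sketch, not a prescribed format):
> > > > > > > >
> > > > > > > > import java.io.Serializable;
> > > > > > > > import java.util.Objects;
> > > > > > > > import java.util.UUID;
> > > > > > > >
> > > > > > > > /** Cache key; otx is null once the key is finalized. */
> > > > > > > > class OtxKey implements Serializable {
> > > > > > > >     final String key; // business key, e.g. "k1"
> > > > > > > >     final UUID otx;   // non-null only for intermediate keys
> > > > > > > >     OtxKey(String key, UUID otx) { this.key = key; this.otx = otx; }
> > > > > > > >     @Override public boolean equals(Object o) {
> > > > > > > >         return o instanceof OtxKey && ((OtxKey)o).key.equals(key)
> > > > > > > >             && Objects.equals(((OtxKey)o).otx, otx);
> > > > > > > >     }
> > > > > > > >     @Override public int hashCode() { return Objects.hash(key, otx); }
> > > > > > > > }
> > > > > > > >
> > > > > > > > /** Cache value carrying its version. */
> > > > > > > > class OtxValue implements Serializable {
> > > > > > > >     final Object val;
> > > > > > > >     final UUID ver;
> > > > > > > >     OtxValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
> > > > > > > > }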
> > > > > > > >
> > > > > > > > Workflow is the following:
> > > > > > > >
> > > > > > > > Orchestrator starts the distributed transaction with
> > > > > > > > `otx` = x and passes this parameter to all the services.
> > > > > > > >
> > > > > > > > Service A:
> > > > > > > >  - does some computations
> > > > > > > >  - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > > > > > > >       where
> > > > > > > >           Za - the time left from the max Orchestrator TX
> > > > > > > >                duration after Service A ends
> > > > > > > >           k1x, k2x - new temporary keys with field `otx` = x
> > > > > > > >           v2a has updated version `ver`
> > > > > > > >  - returns a set of updated keys and all the old versions to
> > > > > > > >    the orchestrator, or just stores it in some special atomic
> > > > > > > >    cache like [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > > > >
> > > > > > > > Service B:
> > > > > > > >  - retrieves the updated value k2x => v2a because it knows
> > > > > > > >    `otx` = x
> > > > > > > >  - does computations
> > > > > > > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > > > >  - updates the set of updated keys like
> > > > > > > >    [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> > > > > > > >
> > > > > > > > Service Committer (may be embedded into Orchestrator):
> > > > > > > >  - takes all the updated keys and versions for `otx` = x:
> > > > > > > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > > > >  - in a single transaction checks value versions for all the
> > > > > > > >    old values and replaces them with the calculated new ones
> > > > > > > >  - does cleanup of temporary keys and values
> > > > > > > >  - in case of version mismatch or TX timeout just rolls back
> > > > > > > >    and signals the Orchestrator to restart the job with a
> > > > > > > >    new `otx`
> > > > > > > >
> > > > > > > > PROFIT!!
> > > > > > > >
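> > > > > > > > As a sketch of the Service A step (reusing the
> > > > > > > > OtxKey/OtxValue shapes above; the cache names are my
> > > > > > > > assumptions, while withExpiryPolicy() and CreatedExpiryPolicy
> > > > > > > > are the real Ignite/JCache expiry API):
> > > > > > > >
> > > > > > > > import java.util.HashMap;
> > > > > > > > import java.util.Map;
> > > > > > > > import java.util.UUID;
> > > > > > > > import java.util.concurrent.TimeUnit;
> > > > > > > > import javax.cache.expiry.CreatedExpiryPolicy;
> > > > > > > > import javax.cache.expiry.Duration;
> > > > > > > > import org.apache.ignite.Ignite;
> > > > > > > > import org.apache.ignite.IgniteCache;
> > > > > > > >
> > > > > > > > void serviceA(Ignite ignite, UUID otx, long zaMillis) {
> > > > > > > >     IgniteCache<OtxKey, OtxValue> data = ignite.cache("data");
> > > > > > > >     // Intermediate entries expire after Za, so a crash leaves no garbage.
> > > > > > > >     IgniteCache<OtxKey, OtxValue> tmp = data.withExpiryPolicy(
> > > > > > > >         new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, zaMillis)));
> > > > > > > >
> > > > > > > >     OtxValue v1 = data.get(new OtxKey("k1", null)); // assumes k1, k2
> > > > > > > >     OtxValue v2 = data.get(new OtxKey("k2", null)); // already committed
> > > > > > > >
> > > > > > > >     tmp.put(new OtxKey("k1", otx), new OtxValue("v1a", UUID.randomUUID()));
> > > > > > > >     tmp.put(new OtxKey("k2", otx), new OtxValue("v2a", UUID.randomUUID()));
> > > > > > > >
> > > > > > > >     // Publish [x => (k1 -> ver1, k2 -> ver2)] for the committer, TTL = Za.
> > > > > > > >     Map<String, UUID> vers = new HashMap<>();
> > > > > > > >     vers.put("k1", v1.ver);
> > > > > > > >     vers.put("k2", v2.ver);
> > > > > > > >     ignite.<UUID, Map<String, UUID>>cache("otx-updates")
> > > > > > > >         .withExpiryPolicy(new CreatedExpiryPolicy(
> > > > > > > >             new Duration(TimeUnit.MILLISECONDS, zaMillis)))
> > > > > > > >         .put(otx, vers);
> > > > > > > > }
> > > > > > > >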
> > > > > > > > This approach even allows you to run independent parts of the
> > > > > > > > graph in parallel (with TX transfer you will always run only
> > > > > > > > one at a time). Also it does not require inventing any special
> > > > > > > > fault tolerance techniques, because Ignite caches are already
> > > > > > > > fault tolerant and all the intermediate results are virtually
> > > > > > > > invisible and stored with TTL; thus in case of any crash you
> > > > > > > > will not have inconsistent state or garbage.
> > > > > > > >
> > > > > > > >
> > > > > > > > Sergi
> > > > > > > >
> > > > > > > >
> > > > > > > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > >
> > > > > > > > > Okay, we are open for proposals on the business task. I
> > > > > > > > > mean, we can make use of some other thing, not a distributed
> > > > > > > > > transaction. Not a transaction yet.
> > > > > > > > >
> > > > > > > > > Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> > > > > >:
> > > > > > > > >
> > > > > > > > > > IMO the use case makes sense. However, as Sergi already
> > > > > > > > > > mentioned, the problem is far more complex than simply
> > > > > > > > > > passing TX state over a wire. Most probably a kind of
> > > > > > > > > > coordinator will still be required to manage all kinds of
> > > > > > > > > > failures. This task should be started with a clean design
> > > > > > > > > > proposal explaining how we handle all these concurrent
> > > > > > > > > > events. And only then, when we understand all the
> > > > > > > > > > implications, should we move to the development stage.
> > > > > > stage.
> > > > > > > > > >
> > > > > > > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > Right
> > > > > > > > > > >
> > > > > > > > > > > Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > >
> > > > > > > > > > > > Good! Basically your orchestrator just takes some
> > > > > > > > > > > > predefined graph of distributed services to be
> > > > > > > > > > > > invoked, calls them by some kind of RPC and passes the
> > > > > > > > > > > > needed parameters between them, right?
> > > > > > > > > > > >
> > > > > > > > > > > > Sergi
> > > > > > > > > > > >
> > > > > > > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > >
> > > > > > > > > > > > > the orchestrator is a custom thing. It is
> > > > > > > > > > > > > responsible for managing business scenario flows.
> > > > > > > > > > > > > Many nodes are involved in scenarios. They exchange
> > > > > > > > > > > > > data and follow one another. If you are acquainted
> > > > > > > > > > > > > with the BPMN framework, the orchestrator is like a
> > > > > > > > > > > > > bpmn engine.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Tue, Mar 14, 2017 at 18:56, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > What is Orchestrator for you? Is it a thing from
> > > > > > > > > > > > > > Microsoft or your custom in-house software?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Fine. Let's say we've got multiple servers which
> > > > > > > > > > > > > > > fulfill custom logic. These servers compound an
> > > > > > > > > > > > > > > oriented graph (a BPMN process) which is
> > > > > > > > > > > > > > > controlled by the Orchestrator.
> > > > > > > > > > > > > > > For instance, *server1* creates *variable A*
> > > > > > > > > > > > > > > with value 1, persists it to the IGNITE cache,
> > > > > > > > > > > > > > > then creates *variable B* and sends it to
> > > > > > > > > > > > > > > *server2*. The latter receives *variable B*,
> > > > > > > > > > > > > > > does some logic with it and stores it to IGNITE.
> > > > > > > > > > > > > > > All the work made by both servers must be
> > > > > > > > > > > > > > > fulfilled in *one* transaction, because we need
> > > > > > > > > > > > > > > all the information done, or nothing (rolled
> > > > > > > > > > > > > > > back). The scenario is managed by the
> > > > > > > > > > > > > > > orchestrator.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Ok, it is not a business case, it is your
> > > > > > > > > > > > > > > > wrong solution for it.
> > > > > > > > > > > > > > > > Lets try again, what is the business case?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > The case is the following: one starts a
> > > > > > > > > > > > > > > > > transaction in one node, and commits this
> > > > > > > > > > > > > > > > > transaction in another jvm node (or rolls it
> > > > > > > > > > > > > > > > > back remotely).
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Because even if you make it work for some
> > > > > > > > > > > > > > > > > > simplistic scenario, get ready to write
> > > > > > > > > > > > > > > > > > many fault tolerance tests and make sure
> > > > > > > > > > > > > > > > > > that your TXs work gracefully in all modes
> > > > > > > > > > > > > > > > > > in case of crashes. Also make sure that we
> > > > > > > > > > > > > > > > > > do not have any performance drops after
> > > > > > > > > > > > > > > > > > all your changes in existing benchmarks.
> > > > > > > > > > > > > > > > > > All in all I don't believe these
> > > > > > > > > > > > > > > > > > conditions will be met and your
> > > > > > > > > > > > > > > > > > contribution will be accepted.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Better solution to what problem? Sending a
> > > > > > > > > > > > > > > > > > TX to another node? The problem statement
> > > > > > > > > > > > > > > > > > itself is already wrong. What business
> > > > > > > > > > > > > > > > > > case are you trying to solve? I'm sure
> > > > > > > > > > > > > > > > > > everything you need can be done in a much
> > > > > > > > > > > > > > > > > > more simple and efficient way at the
> > > > > > > > > > > > > > > > > > application level.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Why wrong ? You know the better solution?
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Just serializing a TX object and
> > > > > > > > > > > > > > > > > > > > deserializing it on another node is
> > > > > > > > > > > > > > > > > > > > meaningless, because the other nodes
> > > > > > > > > > > > > > > > > > > > participating in the TX have to know
> > > > > > > > > > > > > > > > > > > > about the new coordinator. This will
> > > > > > > > > > > > > > > > > > > > require protocol changes, and we will
> > > > > > > > > > > > > > > > > > > > definitely have fault tolerance and
> > > > > > > > > > > > > > > > > > > > performance issues. IMO the whole idea
> > > > > > > > > > > > > > > > > > > > is wrong and it makes no sense to
> > > > > > > > > > > > > > > > > > > > waste time on it.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > The IgniteTransactionState
> > > > > > > > > > > > > > > > > > > > > implementation contains
> > > > > > > > > > > > > > > > > > > > > IgniteTxEntry's, which are supposed
> > > > > > > > > > > > > > > > > > > > > to be transferable.
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > It sounds a little scary to me
> > > > > > > > > > > > > > > > > > > > > > that we are passing transaction
> > > > > > > > > > > > > > > > > > > > > > objects around. Such an object may
> > > > > > > > > > > > > > > > > > > > > > contain all sorts of Ignite
> > > > > > > > > > > > > > > > > > > > > > context. If some data needs to be
> > > > > > > > > > > > > > > > > > > > > > passed across, we should create a
> > > > > > > > > > > > > > > > > > > > > > special transfer object in this
> > > > > > > > > > > > > > > > > > > > > > case.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > well, there are a couple of
> > > > > > > > > > > > > > > > > > > > > > > issues preventing the
> > > > > > > > > > > > > > > > > > > > > > > transaction from proceeding.
> > > > > > > > > > > > > > > > > > > > > > > At first, after transaction
> > > > > > > > > > > > > > > > > > > > > > > serialization and
> > > > > > > > > > > > > > > > > > > > > > > deserialization on the remote
> > > > > > > > > > > > > > > > > > > > > > > server there is no txState. So
> > > > > > > > > > > > > > > > > > > > > > > im going to put it in
> > > > > > > > > > > > > > > > > > > > > > > writeExternal()\readExternal()
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > The last one is that the
> > > > > > > > > > > > > > > > > > > > > > > deserialized transaction lacks
> > > > > > > > > > > > > > > > > > > > > > > the shared cache context field
> > > > > > > > > > > > > > > > > > > > > > > at TransactionProxyImpl.
> > > > > > > > > > > > > > > > > > > > > > > Perhaps it must be injected by
> > > > > > > > > > > > > > > > > > > > > > > GridResourceProcessor ?
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > while starting and continuing
> > > > > > > > > > > > > > > > > > > > > > > > a transaction in different
> > > > > > > > > > > > > > > > > > > > > > > > jvms i run into a
> > > > > > > > > > > > > > > > > > > > > > > > serialization exception in
> > > > > > > > > > > > > > > > > > > > > > > > writeExternalMeta :
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > some meta cannot be serialized.
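> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > (A sketch of what putting txState into
> > > > > > > > > > > > > > > > > > > > > > > > writeExternal()/readExternal() might look
> > > > > > > > > > > > > > > > > > > > > > > > like; txState, its type and readExternalMeta()
> > > > > > > > > > > > > > > > > > > > > > > > are assumed here, not taken from the actual
> > > > > > > > > > > > > > > > > > > > > > > > Ignite sources:)
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > > > > >     out.writeObject(txState); // ship the tx state with the meta
> > > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > @Override public void readExternal(ObjectInput in)
> > > > > > > > > > > > > > > > > > > > > > > >     throws IOException, ClassNotFoundException {
> > > > > > > > > > > > > > > > > > > > > > > >     readExternalMeta(in);
> > > > > > > > > > > > > > > > > > > > > > > >     txState = (IgniteTxState)in.readObject(); // restore it remotely
> > > > > > > > > > > > > > > > > > > > > > > > }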
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
we've discovered several problems with your "accumulation" approach. These
are:

   1. performance issues when transferring data from the temporary cache to
   the permanent one. Keep in mind the large number of concurrent
   transactions in the Service committer.
   2. extreme memory load when keeping the temporary cache in memory.
   3. since the user is not necessarily acquainted with Ignite, working with
   the cache must be transparent for him. Keep this in mind. The user's node
   can evaluate logic with no transaction at all, so we should deal with both
   types of execution flow: transactional and non-transactional. Another
   problem is transaction id support at the user node. We would have to
   handle all these issues and many more.
   4. we cannot pessimistically lock an entity.

As a result, we decided to move on with building a distributed transaction.
We put aside your "accumulation" approach until we figure out how to solve
the difficulties above.

On Thu, Mar 16, 2017 at 16:56, Sergi Vladykin <se...@gmail.com> wrote:

> The problem "How to run millions of entities, and millions of operations on
> a single Pentium3" is out of scope here. Do the math, plan capacity
> reasonably.
>
> Sergi
>
> 2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > hmm, if we have millions of entities and millions of operations, would
> > not this approach lead to memory overflow and performance degradation?
> >
> > On Thu, Mar 16, 2017 at 15:42, Sergi Vladykin <se...@gmail.com> wrote:
> >
> > > 1. Actually you have to check versions on all the values you have read
> > > during the tx.
> > >
> > > For example if we have [k1 => v1, k2 => v2] and do:
> > >
> > > put(k1, get(k2) + 5)
> > >
> > > We have to remember the version for k2. This logic can be relatively
> > > easily encapsulated in a framework on top of Ignite. You need to
> > > implement one to make all this stuff usable.
> > >
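A minimal sketch of such read-version tracking, assuming values are wrapped
in a class carrying the `ver` field described later in this thread
(VersionedValue and TxReadSet are illustrative names, not Ignite API):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.ignite.IgniteCache;

/** Illustrative value wrapper carrying a version field. */
class VersionedValue {
    final Object val;
    final UUID ver;

    VersionedValue(Object val, UUID ver) {
        this.val = val;
        this.ver = ver;
    }
}

/** Remembers the version of every value read inside the logical transaction. */
class TxReadSet {
    /** Key -> version observed at read time. */
    private final Map<String, UUID> readVers = new ConcurrentHashMap<>();

    /** Reads a value through the read set, recording its current version. */
    Object read(IgniteCache<String, VersionedValue> cache, String key) {
        VersionedValue v = cache.get(key);

        if (v != null)
            readVers.put(key, v.ver);

        return v == null ? null : v.val;
    }

    /** Versions the committer must re-check before the final commit. */
    Map<String, UUID> readVersions() {
        return readVers;
    }
}

With such a wrapper, the put(k1, get(k2) + 5) example above records the
version of k2 in the read set, and the committer can refuse to commit if k2
has changed in the meantime.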
> > > 2. I suggest avoiding any locking here, because you will easily end up
> > > with deadlocks. If you do not have too frequent updates for your keys,
> > > the optimistic approach will work just fine.
> > >
> > > Theoretically in the Committer Service you can start a thread for the
> > > lifetime of the whole distributed transaction, take a lock on the key
> > > using IgniteCache.lock(K key) before executing any Services, wait for
> > > all the services to complete, execute the optimistic commit in the same
> > > thread while keeping this lock, and then release it. Notice that all
> > > the Ignite transactions inside all Services must be optimistic here to
> > > be able to read this locked key.
> > >
> > > But again, I do not recommend using this approach until you have a
> > > reliable deadlock avoidance scheme.
> > >
> > > Sergi
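For illustration only, and with the deadlock caveat above in force, a rough
sketch of the lock-holding committer thread described here; runServices()
and commitOptimistically() are hypothetical stand-ins for the service graph
and the version-checking commit:

import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

class LockingCommitter {
    /** Sketch: hold a cache lock for the whole distributed transaction. */
    void execute(Ignite ignite, String guardKey) {
        IgniteCache<String, Object> cache = ignite.cache("testCache");

        // IgniteCache.lock(K key) returns a java.util.concurrent.locks.Lock.
        Lock lock = cache.lock(guardKey);

        lock.lock();
        try {
            // All Ignite transactions inside the services must be
            // optimistic, otherwise they cannot read the locked key.
            runServices();

            // Version check + final commit in the same thread that holds
            // the lock.
            commitOptimistically();
        }
        finally {
            lock.unlock();
        }
    }

    private void runServices() { /* Invoke Service A, Service B, ... */ }

    private void commitOptimistically() { /* Version check + swap, see below. */ }
}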
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > >
> > > > Yeah, now I got it.
> > > > There are some doubts about this approach:
> > > > 1) During the optimistic commit phase, when you assure no one altered
> > > > the original values, you must check versions of other dependent keys.
> > > > How could we obtain those keys (in an automated manner, of course)?
> > > > 2) How could we lock a key before some Service A introduces changes,
> > > > so that no other service is allowed to change this key-value (a sort
> > > > of pessimistic locking)?
> > > > Maybe you know some implementations of such an approach?
> > > >
> > > > On Wed, Mar 15, 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > >
> > > > > Thank you very much for the help. I will answer later.
> > > > >
> > > > > On Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > >
> > > > > The services do not update keys in place; they only generate new
> > > > > keys augmented by otx and store the updated values in the same
> > > > > cache, plus remember the keys and versions participating in the
> > > > > transaction in some separate atomic cache.
> > > > >
> > > > > Follow this sequence of changes applied to the cache contents by
> > > > > each Service:
> > > > >
> > > > > Initial cache contents:
> > > > >             [k1 => v1]
> > > > >             [k2 => v2]
> > > > >             [k3 => v3]
> > > > >
> > > > > Cache contents after Service A:
> > > > >             [k1 => v1]
> > > > >             [k2 => v2]
> > > > >             [k3 => v3]
> > > > >             [k1x => v1a]
> > > > >             [k2x => v2a]
> > > > >
> > > > >          + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > > >          atomic cache
> > > > >
> > > > > Cache contents after Service B:
> > > > >             [k1 => v1]
> > > > >             [k2 => v2]
> > > > >             [k3 => v3]
> > > > >             [k1x => v1a]
> > > > >             [k2x => v2ab]
> > > > >             [k3x => v3b]
> > > > >
> > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > >         separate atomic cache
> > > > >
> > > > > Finally the Committer Service takes this map of updated keys and
> > > > > their versions from some separate atomic cache, starts an Ignite
> > > > > transaction and replaces all the values for the k* keys with the
> > > > > values taken from the k*x keys. The successful result must be the
> > > > > following:
> > > > >
> > > > >             [k1 => v1a]
> > > > >             [k2 => v2ab]
> > > > >             [k3 => v3b]
> > > > >             [k1x => v1a]
> > > > >             [k2x => v2ab]
> > > > >             [k3x => v3b]
> > > > >
> > > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > > >         separate atomic cache
> > > > >
> > > > > But the Committer Service also has to check that no one updated the
> > > > > original values before us, because otherwise we cannot give any
> > > > > serializability guarantee for these distributed transactions. Here
> > > > > we may need to check not only the versions of the updated keys, but
> > > > > also the versions of any other keys the end result depends on.
> > > > >
> > > > > After that, the Committer Service has to do a cleanup (maybe
> > > > > outside of the committing tx) to come to the following final state:
> > > > >
> > > > >             [k1 => v1a]
> > > > >             [k2 => v2ab]
> > > > >             [k3 => v3b]
> > > > >
> > > > > Makes sense?
> > > > >
> > > > > Sergi
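A sketch of that final check-and-swap, assuming the per-transaction
key/version map is kept in a separate atomic cache keyed by the transaction
id x, and values carry a `ver` field (the VersionedValue wrapper from the
earlier sketch; cache names are illustrative):

import java.util.Map;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

class CommitterService {
    /** @return True if committed; false if a concurrent update was detected. */
    boolean commit(Ignite ignite, UUID x) {
        IgniteCache<String, VersionedValue> cache = ignite.cache("testCache");
        IgniteCache<UUID, Map<String, UUID>> meta = ignite.cache("txMeta");

        Map<String, UUID> expected = meta.get(x);

        try (Transaction tx = ignite.transactions().txStart()) {
            // Check that no one updated the original values before us.
            for (Map.Entry<String, UUID> e : expected.entrySet()) {
                VersionedValue cur = cache.get(e.getKey());

                if (cur == null || !cur.ver.equals(e.getValue()))
                    return false; // Version mismatch: tx rolls back on close.
            }

            // Replace the originals with the temporary k*x values.
            for (String k : expected.keySet())
                cache.put(k, cache.get(tmpKey(k, x)));

            tx.commit();
        }

        // Cleanup of temporary keys may happen outside the committing tx.
        for (String k : expected.keySet())
            cache.remove(tmpKey(k, x));

        meta.remove(x);

        return true;
    }

    /** Illustrative temporary-key scheme standing in for "k1" -> "k1x". */
    private String tmpKey(String k, UUID x) {
        return k + ':' + x;
    }
}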
> > > > >
> > > > >
> > > > > 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > >
> > > > > >    - What do you mean by saying "*in a single transaction checks
> > > > > >    value versions for all the old values and replaces them with
> > > > > >    calculated new ones*"? Every time you change a value (in some
> > > > > >    service), you store it to *some special atomic cache*, so when
> > > > > >    all services have ceased working, the Service committer has
> > > > > >    the values with the latest versions.
> > > > > >    - After "*does cleanup of temporary keys and values*", the
> > > > > >    Service committer persists them into the permanent store,
> > > > > >    doesn't it?
> > > > > >    - I can't grasp your thought: you say "*in case of version
> > > > > >    mismatch or TX timeout just rollbacks*". But what versions
> > > > > >    would it match?
> > > > > >
> > > > > >
> > > > > > On Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > >
> > > > > > > Ok, here is what you actually need to implement at the
> > > > > > > application level.
> > > > > > >
> > > > > > > Let's say we have to call 2 services in the following order:
> > > > > > >  - Service A: wants to update keys [k1 => v1,  k2 => v2] to
> > > > > > >    [k1 => v1a,  k2 => v2a]
> > > > > > >  - Service B: wants to update keys [k2 => v2a, k3 => v3] to
> > > > > > >    [k2 => v2ab, k3 => v3b]
> > > > > > >
> > > > > > > The change
> > > > > > >     from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> > > > > > >     to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > > > must happen in a single transaction.
> > > > > > >
> > > > > > >
> > > > > > > Optimistic protocol to solve this:
> > > > > > >
> > > > > > > Each cache key must have a field `otx`, which is a unique
> > > > > > > orchestrator TX identifier - it must be a parameter passed to
> > > > > > > all the services. If `otx` is set to some value, it means that
> > > > > > > it is an intermediate key and is visible only inside of some
> > > > > > > transaction; for a finalized key `otx` must be null - it means
> > > > > > > the key is committed and visible to everyone.
> > > > > > >
> > > > > > > Each cache value must have a field `ver`, which is a version of
> > > > > > > that value.
> > > > > > >
> > > > > > > For both fields (`otx` and `ver`) the safest way is to use UUID.
> > > > > > >
> > > > > > > Workflow is the following:
> > > > > > >
> > > > > > > Orchestrator starts the distributed transaction with `otx` = x
> > > > > > > and passes this parameter to all the services.
> > > > > > >
> > > > > > > Service A:
> > > > > > >  - does some computations
> > > > > > >  - stores [k1x => v1a, k2x => v2a] with TTL = Za
> > > > > > >       where
> > > > > > >           Za - time left from the max Orchestrator TX duration
> > > > > > >           after Service A ends
> > > > > > >           k1x, k2x - new temporary keys with field `otx` = x
> > > > > > >           v2a has an updated version `ver`
> > > > > > >  - returns a set of updated keys and all the old versions to
> > > > > > >    the orchestrator, or just stores it in some special atomic
> > > > > > >    cache like [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > > >
> > > > > > > Service B:
> > > > > > >  - retrieves the updated value k2x => v2a because it knows
> > > > > > >    `otx` = x
> > > > > > >  - does computations
> > > > > > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > > >  - updates the set of updated keys like
> > > > > > >    [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> > > > > > >
> > > > > > > Service Committer (may be embedded into Orchestrator):
> > > > > > >  - takes all the updated keys and versions for `otx` = x:
> > > > > > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > > >  - in a single transaction, checks value versions for all the
> > > > > > >    old values and replaces them with the calculated new ones
> > > > > > >  - does cleanup of temporary keys and values
> > > > > > >  - in case of a version mismatch or TX timeout, just rolls back
> > > > > > >    and signals the Orchestrator to restart the job with a new
> > > > > > >    `otx`
> > > > > > >
> > > > > > > PROFIT!!
> > > > > > >
> > > > > > > This approach even allows you to run independent parts of the
> > > > > > > graph in parallel (with TX transfer you will always run only
> > > > > > > one at a time). Also it does not require inventing any special
> > > > > > > fault tolerance techniques, because Ignite caches are already
> > > > > > > fault tolerant and all the intermediate results are virtually
> > > > > > > invisible and stored with TTL; thus in case of any crash you
> > > > > > > will not have inconsistent state or garbage.
> > > > > > >
> > > > > > > Sergi
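To make the Service A step concrete, a sketch under the same assumptions
(VersionedValue and the cache names come from the sketches above; the TTL
uses the standard JCache expiry policy plugged into
IgniteCache.withExpiryPolicy):

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

class ServiceA {
    /** @param za Time left from the max Orchestrator TX duration, in ms. */
    void run(Ignite ignite, UUID otx, long za) {
        IgniteCache<String, VersionedValue> cache = ignite.cache("testCache");

        // Temporary keys are written with TTL = Za, so an abandoned
        // transaction leaves no garbage behind.
        IgniteCache<String, VersionedValue> tmp = cache.withExpiryPolicy(
            new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, za)));

        VersionedValue v1 = cache.get("k1");
        VersionedValue v2 = cache.get("k2");

        // Store intermediate results under temporary keys carrying otx,
        // each with a freshly generated version.
        tmp.put("k1:" + otx, new VersionedValue(computeV1a(v1), UUID.randomUUID()));
        tmp.put("k2:" + otx, new VersionedValue(computeV2a(v2), UUID.randomUUID()));

        // Remember the old versions for the committer check; Service B
        // would later extend this map with k3.
        Map<String, UUID> vers = new HashMap<>();
        vers.put("k1", v1.ver);
        vers.put("k2", v2.ver);

        ignite.<UUID, Map<String, UUID>>cache("txMeta").put(otx, vers);
    }

    private Object computeV1a(VersionedValue v) { return v.val; } // Stand-in.
    private Object computeV2a(VersionedValue v) { return v.val; } // Stand-in.
}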
> > > > > > >
> > > > > > >
> > > > > > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > >
> > > > > > > > Okay, we are open to proposals on the business task. I mean,
> > > > > > > > we can make use of some other thing, not a distributed
> > > > > > > > transaction. Not a transaction yet.
> > > > > > > >
> > > > > > > > On Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com> wrote:
> > > > > > > >
> > > > > > > > > IMO the use case makes sense. However, as Sergi already
> > > > > > > > > mentioned, the problem is far more complex than simply
> > > > > > > > > passing TX state over a wire. Most probably a kind of
> > > > > > > > > coordinator will still be required to manage all kinds of
> > > > > > > > > failures. This task should be started with a clean design
> > > > > > > > > proposal explaining how we handle all these concurrent
> > > > > > > > > events. And only then, when we understand all implications,
> > > > > > > > > should we move to the development stage.
> > > > > > > > >
> > > > > > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Right
> > > > > > > > > >
> > > > > > > > > > On Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > Good! Basically your orchestrator just takes some
> > > > > > > > > > > predefined graph of distributed services to be invoked,
> > > > > > > > > > > calls them by some kind of RPC and passes the needed
> > > > > > > > > > > parameters between them, right?
> > > > > > > > > > >
> > > > > > > > > > > Sergi
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > >
> > > > > > > > > > > > The orchestrator is a custom thing. It is responsible
> > > > > > > > > > > > for managing business scenario flows. Many nodes are
> > > > > > > > > > > > involved in the scenarios. They exchange data and
> > > > > > > > > > > > follow one another. If you are acquainted with the
> > > > > > > > > > > > BPMN framework, the orchestrator is like a BPMN
> > > > > > > > > > > > engine.
> > > > > > > > > > > >
> > > > > > > > > > > > On Tue, Mar 14, 2017 at 18:56, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > What is Orchestrator for you? Is it a thing from
> > > > > > > > > > > > > Microsoft or your custom in-house software?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Fine. Let's say we've got multiple servers which
> > > > > > > > > > > > > > fulfill custom logic. These servers compound an
> > > > > > > > > > > > > > oriented graph (a BPMN process) which is
> > > > > > > > > > > > > > controlled by the Orchestrator. For instance,
> > > > > > > > > > > > > > *server1* creates *variable A* with value 1,
> > > > > > > > > > > > > > persists it to the IGNITE cache, creates
> > > > > > > > > > > > > > *variable B* and sends it to *server2*. The
> > > > > > > > > > > > > > latter receives *variable B*, does some logic
> > > > > > > > > > > > > > with it and stores it to IGNITE. All the work
> > > > > > > > > > > > > > done by both servers must be fulfilled in *one*
> > > > > > > > > > > > > > transaction, because we need all the information
> > > > > > > > > > > > > > committed, or nothing (rolled back). The scenario
> > > > > > > > > > > > > > is managed by the orchestrator.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Ok, that is not a business case, it is your
> > > > > > > > > > > > > > > wrong solution for it. Let's try again: what is
> > > > > > > > > > > > > > > the business case?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The case is the following: one starts a
> > > > > > > > > > > > > > > > transaction on one node, and commits this
> > > > > > > > > > > > > > > > transaction on another JVM node (or rolls it
> > > > > > > > > > > > > > > > back remotely).
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > On Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Because even if you make it work for some
> > > > > > > > > > > > > > > > > simplistic scenario, get ready to write
> > > > > > > > > > > > > > > > > many fault tolerance tests and make sure
> > > > > > > > > > > > > > > > > that your TXs work gracefully in all modes
> > > > > > > > > > > > > > > > > in case of crashes. Also make sure that we
> > > > > > > > > > > > > > > > > do not have any performance drops in
> > > > > > > > > > > > > > > > > existing benchmarks after all your changes.
> > > > > > > > > > > > > > > > > All in all, I don't believe these
> > > > > > > > > > > > > > > > > conditions will be met and your
> > > > > > > > > > > > > > > > > contribution will be accepted.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > A better solution to what problem? Sending
> > > > > > > > > > > > > > > > > a TX to another node? The problem statement
> > > > > > > > > > > > > > > > > itself is already wrong. What business case
> > > > > > > > > > > > > > > > > are you trying to solve? I'm sure
> > > > > > > > > > > > > > > > > everything you need can be done in a much
> > > > > > > > > > > > > > > > > simpler and more efficient way at the
> > > > > > > > > > > > > > > > > application level.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Why wrong? Do you know a better solution?
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > On Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Just serializing the TX object and
> > > > > > > > > > > > > > > > > > > deserializing it on another node is
> > > > > > > > > > > > > > > > > > > meaningless, because the other nodes
> > > > > > > > > > > > > > > > > > > participating in the TX have to know
> > > > > > > > > > > > > > > > > > > about the new coordinator. This will
> > > > > > > > > > > > > > > > > > > require protocol changes, and we will
> > > > > > > > > > > > > > > > > > > definitely have fault tolerance and
> > > > > > > > > > > > > > > > > > > performance issues. IMO the whole idea
> > > > > > > > > > > > > > > > > > > is wrong and it makes no sense to waste
> > > > > > > > > > > > > > > > > > > time on it.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > The IgniteTransactionState
> > > > > > > > > > > > > > > > > > > > implementation contains IgniteTxEntry
> > > > > > > > > > > > > > > > > > > > instances, which are supposed to be
> > > > > > > > > > > > > > > > > > > > transferable.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org> wrote:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > It sounds a little scary to me that
> > > > > > > > > > > > > > > > > > > > > we are passing transaction objects
> > > > > > > > > > > > > > > > > > > > > around. Such an object may contain
> > > > > > > > > > > > > > > > > > > > > all sorts of Ignite context. If
> > > > > > > > > > > > > > > > > > > > > some data needs to be passed
> > > > > > > > > > > > > > > > > > > > > across, we should create a special
> > > > > > > > > > > > > > > > > > > > > transfer object in this case.
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > D.
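For example, such a transfer object might carry only the minimal data a
remote node needs to re-attach to a transaction (TxTransferData is a made-up
name for illustration, not an existing Ignite class):

import java.io.Serializable;
import java.util.UUID;

import org.apache.ignite.lang.IgniteUuid;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

/** Hypothetical transfer object passed instead of the transaction itself. */
class TxTransferData implements Serializable {
    private static final long serialVersionUID = 0L;

    /** Transaction id. */
    final IgniteUuid xid;

    /** Node that originally started the transaction. */
    final UUID origNodeId;

    final TransactionConcurrency concurrency;
    final TransactionIsolation isolation;
    final long timeout;

    TxTransferData(IgniteUuid xid, UUID origNodeId,
        TransactionConcurrency concurrency, TransactionIsolation isolation,
        long timeout) {
        this.xid = xid;
        this.origNodeId = origNodeId;
        this.concurrency = concurrency;
        this.isolation = isolation;
        this.timeout = timeout;
    }
}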
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Well, there are a couple of
> > > > > > > > > > > > > > > > > > > > > > issues preventing the transaction
> > > > > > > > > > > > > > > > > > > > > > from proceeding.
> > > > > > > > > > > > > > > > > > > > > > First, after transaction
> > > > > > > > > > > > > > > > > > > > > > serialization and deserialization
> > > > > > > > > > > > > > > > > > > > > > on the remote server, there is no
> > > > > > > > > > > > > > > > > > > > > > txState. So I'm going to put it
> > > > > > > > > > > > > > > > > > > > > > in writeExternal()/readExternal().
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > The last one: the deserialized
> > > > > > > > > > > > > > > > > > > > > > transaction lacks the shared
> > > > > > > > > > > > > > > > > > > > > > cache context field in
> > > > > > > > > > > > > > > > > > > > > > TransactionProxyImpl. Perhaps it
> > > > > > > > > > > > > > > > > > > > > > must be injected by
> > > > > > > > > > > > > > > > > > > > > > GridResourceProcessor?
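A rough sketch of that direction; the class and the TxState placeholder
below are illustrative, not the actual TransactionProxyImpl internals, and
re-injecting the shared cache context on the receiving side would be a
separate step:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

/** Illustrative proxy that carries its tx state across the wire. */
class TransferableTxProxy implements Externalizable {
    /** Placeholder for whatever state object the proxy holds. */
    private Object txState;

    /** Required by Externalizable. */
    public TransferableTxProxy() {
        // No-op.
    }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        // ... write the existing meta first, then the tx state:
        out.writeObject(txState);
    }

    @Override public void readExternal(ObjectInput in)
        throws IOException, ClassNotFoundException {
        // ... read the existing meta first, then restore the tx state:
        txState = in.readObject();
    }
}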
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > While starting and continuing a
> > > > > > > > > > > > > > > > > > > > > > > transaction in different JVMs I
> > > > > > > > > > > > > > > > > > > > > > > run into a serialization
> > > > > > > > > > > > > > > > > > > > > > > exception in writeExternalMeta:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Some meta cannot be serialized.
> > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > I think I am starting to get
> > > > > > > > > > > > > > > > > > > > > > > what you want, but I have a few
> > > > > > > > > > > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > > > > > > > >  - What is the API for the
> > > > > > > > > > > > > > > > > > > > > > >    proposed change? In your
> > > > > > > > > > > > > > > > > > > > > > >    test, you pass an instance
> > > > > > > > > > > > > > > > > > > > > > >    of a transaction created on
> > > > > > > > > > > > > > > > > > > > > > >    ignite(0) to the ignite
> > > > > > > > > > > > > > > > > > > > > > >    instance ignite(1). This is
> > > > > > > > > > > > > > > > > > > > > > >    obviously not possible in a
> > > > > > > > > > > > > > > > > > > > > > >    truly distributed
> > > > > > > > > > > > > > > > > > > > > > >    (multi-JVM) environment.
> > > > > > > > > > > > > > > > > > > > > > >  - How will you synchronize
> > > > > > > > > > > > > > > > > > > > > > >    cache update actions and
> > > > > > > > > > > > > > > > > > > > > > >    transaction commit? Say, you
> > > > > > > > > > > > > > > > > > > > > > >    have one node that decided
> > > > > > > > > > > > > > > > > > > > > > >    to commit, but another node
> > > > > > > > > > > > > > > > > > > > > > >    is still writing within this
> > > > > > > > > > > > > > > > > > > > > > >    transaction. How do you make
> > > > > > > > > > > > > > > > > > > > > > >    sure that two nodes will not
> > > > > > > > > > > > > > > > > > > > > > >    call commit() and rollback()
> > > > > > > > > > > > > > > > > > > > > > >    simultaneously?
> > > > > > > > > > > > > > > > > > > > > > >  - How do you make sure that
> > > > > > > > > > > > > > > > > > > > > > >    either commit() or
> > > > > > > > > > > > > > > > > > > > > > >    rollback() is called if the
> > > > > > > > > > > > > > > > > > > > > > >    originator failed?
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Dmitriy Ryabov <somefireone@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my
> > > > > > > > > > > > > > > > > > > > > > > > initial understanding was
> > > > > > > > > > > > > > > > > > > > > > > > that the transfer of tx
> > > > > > > > > > > > > > > > > > > > > > > > ownership from one node to
> > > > > > > > > > > > > > > > > > > > > > > > another would happen
> > > > > > > > > > > > > > > > > > > > > > > > automatically when the
> > > > > > > > > > > > > > > > > > > > > > > > originating node goes down.
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > I'm aiming to span a
> > > > > > > > > > > > > > > > > > > > > > > > > transaction over multiple
> > > > > > > > > > > > > > > > > > > > > > > > > threads, nodes, and JVMs
> > > > > > > > > > > > > > > > > > > > > > > > > (soon). So every node is
> > > > > > > > > > > > > > > > > > > > > > > > > able to roll back or commit
> > > > > > > > > > > > > > > > > > > > > > > > > the common transaction. It
> > > > > > > > > > > > > > > > > > > > > > > > > turned out I need to
> > > > > > > > > > > > > > > > > > > > > > > > > transfer the tx between
> > > > > > > > > > > > > > > > > > > > > > > > > nodes in order to commit
> > > > > > > > > > > > > > > > > > > > > > > > > the transaction on a
> > > > > > > > > > > > > > > > > > > > > > > > > different node (in the same
> > > > > > > > > > > > > > > > > > > > > > > > > JVM).
> > > jvm).
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Do you mean that you want
> > > > > > > > > > > > > > > > > > > > > > > > > > a concept of transferring
> > > > > > > > > > > > > > > > > > > > > > > > > > tx ownership from one
> > > > > > > > > > > > > > > > > > > > > > > > > > node to another? My
> > > > > > > > > > > > > > > > > > > > > > > > > > initial understanding was
> > > > > > > > > > > > > > > > > > > > > > > > > > that you want to be able
> > > > > > > > > > > > > > > > > > > > > > > > > > to update keys in a
> > > > > > > > > > > > > > > > > > > > > > > > > > transaction from multiple
> > > > > > > > > > > > > > > > > > > > > > > > > > threads in parallel.
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Well. Consider a transaction started on one node and
> > > > > > > > > > > > > > > > > > > > > > > > > > > continued on another one. The following test describes my idea:
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > In the method *ts.txStart(...)* we just rebind *tx* to the
> > > > > > > > > > > > > > > > > > > > > > > > > > > current thread:
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > In the method *reopenTx* we alter the *threadMap* so that it
> > > > > > > > > > > > > > > > > > > > > > > > > > > binds the transaction to the current thread.
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > What do you think about it?
> > > > > > > > > > > > > > > > > > > > > > > > > > >
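A hypothetical sketch of what the quoted reopenTx could do with that
threadMap; this mirrors the description above, not the actual
IgniteTxManager code:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Illustrative tx manager fragment that re-keys the thread -> tx map. */
class TxManagerSketch<T> {
    private final ConcurrentMap<Long, T> threadMap = new ConcurrentHashMap<>();

    void reopenTx(T tx) {
        // Unbind the tx from whatever thread owned it before...
        threadMap.values().remove(tx);

        // ...and bind it to the current thread.
        threadMap.put(Thread.currentThread().getId(), tx);
    }
}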
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > On Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org> wrote:
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Please share the rationale behind this and the thoughts,
> > > > > > > > > > > > > > > > > > > > > > > > > > > > design ideas you have in mind.
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hi all! I'm designing a distributed transaction which can
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > be started at one node and continued at another one. Has
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > anybody thoughts on it?
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
The problem "How to run millions of entities, and millions of operations on
a single Pentium3" is out of scope here. Do the math, plan capacity
reasonably.

Sergi

2017-03-16 15:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> hmm, if we have millions of entities and millions of operations, would not
> this approach lead to memory overflow and performance degradation?
>
> On Thu, Mar 16, 2017 at 15:42, Sergi Vladykin <se...@gmail.com> wrote:
>
> > 1. Actually you have to check versions on all the values you have read
> > during the tx.
> >
> > For example if we have [k1 => v1, k2 => v2] and do:
> >
> > put(k1, get(k2) + 5)
> >
> > We have to remember the version for k2. This logic can be relatively
> > easily encapsulated in a framework on top of Ignite. You need to
> > implement one to make all this stuff usable.
> >
> > 2. I suggest avoiding any locking here, because you will easily end up
> > with deadlocks. If you do not have too frequent updates for your keys,
> > the optimistic approach will work just fine.
> >
> > Theoretically in the Committer Service you can start a thread for the
> > lifetime of the whole distributed transaction, take a lock on the key
> > using IgniteCache.lock(K key) before executing any Services, wait for
> > all the services to complete, execute the optimistic commit in the same
> > thread while keeping this lock, and then release it. Notice that all the
> > Ignite transactions inside all Services must be optimistic here to be
> > able to read this locked key.
> >
> > But again, I do not recommend using this approach until you have a
> > reliable deadlock avoidance scheme.
> >
> > Sergi
> >
> >
> >
> >
> >
> >
> >
> > 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > Yeah, now I got it.
> > > There are some doubts about this approach:
> > > 1) During the optimistic commit phase, when you assure no one altered
> > > the original values, you must check versions of other dependent keys.
> > > How could we obtain those keys (in an automated manner, of course)?
> > > 2) How could we lock a key before some Service A introduces changes, so
> > > that no other service is allowed to change this key-value (a sort of
> > > pessimistic locking)?
> > > Maybe you know some implementations of such an approach?
> > >
> > > On Wed, Mar 15, 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > >
> > > > Thank you very much for the help. I will answer later.
> > > >
> > > > On Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > >
> > > > The services do not update keys in place; they only generate new
> > > > keys augmented by otx and store the updated values in the same cache,
> > > > plus remember the keys and versions participating in the transaction
> > > > in some separate atomic cache.
> > > >
> > > > Follow this sequence of changes applied to the cache contents by
> > > > each Service:
> > > >
> > > > Initial cache contents:
> > > >             [k1 => v1]
> > > >             [k2 => v2]
> > > >             [k3 => v3]
> > > >
> > > > Cache contents after Service A:
> > > >             [k1 => v1]
> > > >             [k2 => v2]
> > > >             [k3 => v3]
> > > >             [k1x => v1a]
> > > >             [k2x => v2a]
> > > >
> > > >          + [x => (k1 -> ver1, k2 -> ver2)] in some separate
> > > >          atomic cache
> > > >
> > > > Cache contents after Service B:
> > > >             [k1 => v1]
> > > >             [k2 => v2]
> > > >             [k3 => v3]
> > > >             [k1x => v1a]
> > > >             [k2x => v2ab]
> > > >             [k3x => v3b]
> > > >
> > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >         separate atomic cache
> > > >
> > > > Finally the Committer Service takes this map of updated keys and
> > > > their versions from some separate atomic cache, starts an Ignite
> > > > transaction and replaces all the values for the k* keys with the
> > > > values taken from the k*x keys. The successful result must be the
> > > > following:
> > > >
> > > >             [k1 => v1a]
> > > >             [k2 => v2ab]
> > > >             [k3 => v3b]
> > > >             [k1x => v1a]
> > > >             [k2x => v2ab]
> > > >             [k3x => v3b]
> > > >
> > > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some
> > > >         separate atomic cache
> > > >
> > > > But the Committer Service also has to check that no one updated the
> > > > original values before us, because otherwise we cannot give any
> > > > serializability guarantee for these distributed transactions. Here we
> > > > may need to check not only the versions of the updated keys, but also
> > > > the versions of any other keys the end result depends on.
> > > >
> > > > After that, the Committer Service has to do a cleanup (maybe outside
> > > > of the committing tx) to come to the following final state:
> > > >
> > > >             [k1 => v1a]
> > > >             [k2 => v2ab]
> > > >             [k3 => v3b]
> > > >
> > > > Makes sense?
> > > >
> > > > Sergi
> > > >
> > > >
> > > > 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > >
> > > > >    - What do you mean by saying "*in a single transaction checks
> > > > >    value versions for all the old values and replaces them with
> > > > >    calculated new ones*"? Every time you change a value (in some
> > > > >    service), you store it to *some special atomic cache*, so when
> > > > >    all services have ceased working, the Service committer has the
> > > > >    values with the latest versions.
> > > > >    - After "*does cleanup of temporary keys and values*", the
> > > > >    Service committer persists them into the permanent store,
> > > > >    doesn't it?
> > > > >    - I can't grasp your thought: you say "*in case of version
> > > > >    mismatch or TX timeout just rollbacks*". But what versions would
> > > > >    it match?
> > > > >
> > > > >
> > > > > On Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com> wrote:
> > > > >
> > > > > > Ok, here is what you actually need to implement at the
> > > > > > application level.
> > > > > >
> > > > > > Let's say we have to call 2 services in the following order:
> > > > > >  - Service A: wants to update keys [k1 => v1,  k2 => v2] to
> > > > > >    [k1 => v1a,  k2 => v2a]
> > > > > >  - Service B: wants to update keys [k2 => v2a, k3 => v3] to
> > > > > >    [k2 => v2ab, k3 => v3b]
> > > > > >
> > > > > > The change
> > > > > >     from [ k1 => v1,  k2 => v2,   k3 => v3  ]
> > > > > >     to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > > must happen in a single transaction.
> > > > > >
> > > > > >
> > > > > > Optimistic protocol to solve this:
> > > > > >
> > > > > > Each cache key must have a field `otx`, which is a unique
> > > > > > orchestrator TX identifier - it must be a parameter passed to all
> > > > > > the services. If `otx` is set to some value, it means that it is
> > > > > > an intermediate key and is visible only inside of some
> > > > > > transaction; for a finalized key `otx` must be null - it means
> > > > > > the key is committed and visible to everyone.
> > > > > >
> > > > > > Each cache value must have a field `ver`, which is a version of
> > > > > > that value.
> > > > > >
> > > > > > For both fields (`otx` and `ver`) the safest way is to use UUID.
> > > > > >
> > > > > > Workflow is the following:
> > > > > >
> > > > > > Orchestrator starts the distributed transaction with `otx` = x
> and
> > > > passes
> > > > > > this parameter to all the services.
> > > > > >
> > > > > > Service A:
> > > > > >  - does some computations
> > > > > >  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > > > > >       where
> > > > > >           Za - left time from max Orchestrator TX duration after
> > > > Service
> > > > > A
> > > > > > end
> > > > > >           k1x, k2x - new temporary keys with field `otx` = x
> > > > > >           v2a has updated version `ver`
> > > > > >  - returns a set of updated keys and all the old versions to the
> > > > > > orchestrator
> > > > > >        or just stores it in some special atomic cache like
> > > > > >        [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > > > > >
> > > > > > Service B:
> > > > > >  - retrieves the updated value k2x => v2a because it knows `otx`
> =
> > x
> > > > > >  - does computations
> > > > > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > > >  - updates the set of updated keys like [x => (k1 -> ver1, k2 ->
> > > ver2,
> > > > k3
> > > > > > -> ver3)] TTL = Zb
> > > > > >
> > > > > > Service Committer (may be embedded into Orchestrator):
> > > > > >  - takes all the updated keys and versions for `otx` = x
> > > > > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > > >  - in a single transaction checks value versions for all the old
> > > values
> > > > > >        and replaces them with calculated new ones
> > > > > >  - does cleanup of temporary keys and values
> > > > > >  - in case of version mismatch or TX timeout just rollbacks and
> > > signals
> > > > > >         to Orchestrator to restart the job with new `otx`
> > > > > >
> > > > > > PROFIT!!
> > > > > >
> > > > > > This approach even allows you to run independent parts of the
> graph
> > > in
> > > > > > parallel (with TX transfer you will always run only one at a
> time).
> > > > Also
> > > > > it
> > > > > > does not require inventing any special fault tolerance technics
> > > because
> > > > > > Ignite caches are already fault tolerant and all the intermediate
> > > > results
> > > > > > are virtually invisible and stored with TTL, thus in case of any
> > > crash
> > > > > you
> > > > > > will not have inconsistent state or garbage.
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > >
> > > > > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > > >:
> > > > > >
> > > > > > > Okay, we are open for proposals on business task. I mean, we
> can
> > > make
> > > > > use
> > > > > > > of some other thing, not distributed transaction. Not
> transaction
> > > > yet.
> > > > > > >
> > > > > > > Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> > > > > > >
> > > > > > > > IMO the use case makes sense. However, as Sergi already
> > > mentioned,
> > > > > the
> > > > > > > > problem is far more complex, than simply passing TX state
> over
> > a
> > > > > wire.
> > > > > > > Most
> > > > > > > > probably a kind of coordinator will be required still to
> manage
> > > all
> > > > > > kinds
> > > > > > > > of failures. This task should be started with clean design
> > > proposal
> > > > > > > > explaining how we handle all these concurrent events. And
> only
> > > > then,
> > > > > > when
> > > > > > > > we understand all implications, we should move to development
> > > > stage.
> > > > > > > >
> > > > > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Right
> > > > > > > > >
> > > > > > > > > Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > >
> > > > > > > > > > Good! Basically your orchestrator just takes some
> > predefined
> > > > > graph
> > > > > > of
> > > > > > > > > > distributed services to be invoked, calls them by some
> kind
> > > of
> > > > > RPC
> > > > > > > and
> > > > > > > > > > passes the needed parameters between them, right?
> > > > > > > > > >
> > > > > > > > > > Sergi
> > > > > > > > > >
> > > > > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > The orchestrator is a custom thing. It is responsible for
> > > > > > > > > > > managing business scenario flows. Many nodes are involved in
> > > > > > > > > > > the scenarios. They exchange data and follow one another. If
> > > > > > > > > > > you are acquainted with the BPMN framework, the orchestrator
> > > > > > > > > > > is like a BPMN engine.
> > > > > > > > > > >
> > > > > > > > > > > Tue, Mar 14, 2017, 18:56 Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > >
> > > > > > > > > > > > What is Orchestrator for you? Is it a thing from
> > > Microsoft
> > > > or
> > > > > > > your
> > > > > > > > > > custom
> > > > > > > > > > > > in-house software?
> > > > > > > > > > > >
> > > > > > > > > > > > Sergi
> > > > > > > > > > > >
> > > > > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Fine. Let's say we've got multiple servers which fulfill
> > > > > > > > > > > > > custom logic. These servers compose an oriented graph (a
> > > > > > > > > > > > > BPMN process) which is controlled by the Orchestrator.
> > > > > > > > > > > > > For instance, *server1* creates *variable A* with value 1,
> > > > > > > > > > > > > persists it to the IGNITE cache, creates *variable B* and
> > > > > > > > > > > > > sends it to *server2*. The latter receives *variable B*,
> > > > > > > > > > > > > does some logic with it and stores it to IGNITE.
> > > > > > > > > > > > > All the work made by both servers must be fulfilled in
> > > > > > > > > > > > > *one* transaction, because we need either all the
> > > > > > > > > > > > > information persisted, or nothing (rolled back). The
> > > > > > > > > > > > > scenario is managed by the orchestrator.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Ok, it is not a business case, it is your wrong
> > > > > > > > > > > > > > solution for it.
> > > > > > > > > > > > > > Let's try again: what is the business case?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > >:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The case is the following: one starts a transaction
> > > > > > > > > > > > > > > on one node, and commits this transaction on another
> > > > > > > > > > > > > > > jvm node (or rolls it back remotely).
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Because even if you make it work for some
> > > > simplistic
> > > > > > > > > scenario,
> > > > > > > > > > > get
> > > > > > > > > > > > > > ready
> > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > write many fault tolerance tests and make
> sure
> > > that
> > > > > you
> > > > > > > TXs
> > > > > > > > > > work
> > > > > > > > > > > > > > > gracefully
> > > > > > > > > > > > > > > > in all modes in case of crashes. Also make
> sure
> > > > that
> > > > > we
> > > > > > > do
> > > > > > > > > not
> > > > > > > > > > > have
> > > > > > > > > > > > > any
> > > > > > > > > > > > > > > > performance drops after all your changes in
> > > > existing
> > > > > > > > > > benchmarks.
> > > > > > > > > > > > All
> > > > > > > > > > > > > in
> > > > > > > > > > > > > > > all
> > > > > > > > > > > > > > > > I don't believe these conditions will be met
> > and
> > > > your
> > > > > > > > > > > contribution
> > > > > > > > > > > > > will
> > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > accepted.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Better solution to what problem? Sending TX
> to
> > > > > another
> > > > > > > > node?
> > > > > > > > > > The
> > > > > > > > > > > > > > problem
> > > > > > > > > > > > > > > > statement itself is already wrong. What
> > business
> > > > case
> > > > > > you
> > > > > > > > are
> > > > > > > > > > > > trying
> > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > solve? I'm sure everything you need can be
> done
> > > in
> > > > a
> > > > > > much
> > > > > > > > > more
> > > > > > > > > > > > simple
> > > > > > > > > > > > > > and
> > > > > > > > > > > > > > > > efficient way at the application level.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV
> <
> > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Why wrong? Do you know a better solution?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Just serializing TX object and
> > deserializing
> > > it
> > > > > on
> > > > > > > > > another
> > > > > > > > > > > node
> > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > meaningless, because other nodes
> > > participating
> > > > in
> > > > > > the
> > > > > > > > TX
> > > > > > > > > > have
> > > > > > > > > > > > to
> > > > > > > > > > > > > > know
> > > > > > > > > > > > > > > > > about
> > > > > > > > > > > > > > > > > > the new coordinator. This will require
> > > protocol
> > > > > > > > changes,
> > > > > > > > > we
> > > > > > > > > > > > > > > definitely
> > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > have fault tolerance and performance
> > issues.
> > > > IMO
> > > > > > the
> > > > > > > > > whole
> > > > > > > > > > > idea
> > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > wrong
> > > > > > > > > > > > > > > > > > and it makes no sense to waste time on
> it.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY
> > KUZNETSOV
> > > <
> > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > The IgniteTransactionState implementation
> > > > > > > > > > > > > > > > > > > contains IgniteTxEntry's, which are supposed
> > > > > > > > > > > > > > > > > > > to be transferable.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > It sounds a little scary to me that
> we
> > > are
> > > > > > > passing
> > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > objects
> > > > > > > > > > > > > > > > > > > > around. Such object may contain all
> > sorts
> > > > of
> > > > > > > Ignite
> > > > > > > > > > > > context.
> > > > > > > > > > > > > If
> > > > > > > > > > > > > > > > some
> > > > > > > > > > > > > > > > > > data
> > > > > > > > > > > > > > > > > > > > needs to be passed across, we should
> > > > create a
> > > > > > > > special
> > > > > > > > > > > > > transfer
> > > > > > > > > > > > > > > > object
> > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > this case.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM,
> > ALEKSEY
> > > > > > > KUZNETSOV
> > > > > > > > <
> > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > well, there are a couple of issues
> > > > > > > > > > > > > > > > > > > > > preventing the transaction from proceeding.
> > > > > > > > > > > > > > > > > > > > > First, after transaction serialization and
> > > > > > > > > > > > > > > > > > > > > deserialization on the remote server, there
> > > > > > > > > > > > > > > > > > > > > is no txState. So I'm going to put it in
> > > > > > > > > > > > > > > > > > > > > writeExternal()\readExternal().
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > The second one is that the deserialized
> > > > > > > > > > > > > > > > > > > > > transaction lacks the shared cache context
> > > > > > > > > > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps it
> > > > > > > > > > > > > > > > > > > > > must be injected by GridResourceProcessor?
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > while starting and continuing
> > > > transaction
> > > > > > in
> > > > > > > > > > > different
> > > > > > > > > > > > > jvms
> > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > run
> > > > > > > > > > > > > > > > > > > into
> > > > > > > > > > > > > > > > > > > > > > serialization exception in
> > > > > > writeExternalMeta
> > > > > > > :
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > @Override public void
> > > > > > > > writeExternal(ObjectOutput
> > > > > > > > > > out)
> > > > > > > > > > > > > > throws
> > > > > > > > > > > > > > > > > > > > IOException
> > > > > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > some meta is cannot be
> serialized.
> > > > > > > > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > I think I am starting to get what
> > you
> > > > > want,
> > > > > > > > but I
> > > > > > > > > > > have
> > > > > > > > > > > > a
> > > > > > > > > > > > > > few
> > > > > > > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > > > > > > >  - What is the API for the
> proposed
> > > > > change?
> > > > > > > In
> > > > > > > > > your
> > > > > > > > > > > > test,
> > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > pass
> > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > > instance of transaction created
> on
> > > > > > ignite(0)
> > > > > > > to
> > > > > > > > > the
> > > > > > > > > > > > > ignite
> > > > > > > > > > > > > > > > > instance
> > > > > > > > > > > > > > > > > > > > > > ignite(1). This is obviously not
> > > > possible
> > > > > > in
> > > > > > > a
> > > > > > > > > > truly
> > > > > > > > > > > > > > > > distributed
> > > > > > > > > > > > > > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > > > > > > > > > > > > > - How will you synchronize cache
> > > update
> > > > > > > actions
> > > > > > > > > and
> > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > commit?
> > > > > > > > > > > > > > > > > > > > > > Say, you have one node that
> decided
> > > to
> > > > > > > commit,
> > > > > > > > > but
> > > > > > > > > > > > > another
> > > > > > > > > > > > > > > node
> > > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > > still
> > > > > > > > > > > > > > > > > > > > > > writing within this transaction.
> > How
> > > do
> > > > > you
> > > > > > > > make
> > > > > > > > > > sure
> > > > > > > > > > > > > that
> > > > > > > > > > > > > > > two
> > > > > > > > > > > > > > > > > > nodes
> > > > > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > > > > > not call commit() and rollback()
> > > > > > > > simultaneously?
> > > > > > > > > > > > > > > > > > > > > >  - How do you make sure that
> either
> > > > > > commit()
> > > > > > > or
> > > > > > > > > > > > > rollback()
> > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > called
> > > > > > > > > > > > > > > > > > > if
> > > > > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > > originator failed?
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00
> Дмитрий
> > > > Рябов
> > > > > <
> > > > > > > > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial
> > > > > > > > > > > > > > > > > > > > > > > understanding was that transferring of
> > > > > > > > > > > > > > > > > > > > > > > tx ownership from one node to another
> > > > > > > > > > > > > > > > > > > > > > > would happen automatically when the
> > > > > > > > > > > > > > > > > > > > > > > originating node goes down.
> > > > > > > > > > > > > > > > > > > > > > > originating node is gone down.
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00
> > ALEKSEY
> > > > > > > KUZNETSOV
> > > > > > > > <
> > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > I'm aiming to span a transaction over
> > > > > > > > > > > > > > > > > > > > > > > > multiple threads, nodes, jvms (soon).
> > > > > > > > > > > > > > > > > > > > > > > > So every node is able to rollback or
> > > > > > > > > > > > > > > > > > > > > > > > commit the common transaction. It
> > > > > > > > > > > > > > > > > > > > > > > > turned out I need to transfer the tx
> > > > > > > > > > > > > > > > > > > > > > > > between nodes in order to commit the
> > > > > > > > > > > > > > > > > > > > > > > > transaction on a different node (in
> > > > > > > > > > > > > > > > > > > > > > > > the same jvm).
> > > > > > > > > > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Do you mean that you want a
> > > > concept
> > > > > > of
> > > > > > > > > > > > transferring
> > > > > > > > > > > > > > of
> > > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > > > ownership
> > > > > > > > > > > > > > > > > > > > > > > from
> > > > > > > > > > > > > > > > > > > > > > > > > one node to another? My
> > initial
> > > > > > > > > understanding
> > > > > > > > > > > was
> > > > > > > > > > > > > > that
> > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > want
> > > > > > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > > > > able
> > > > > > > > > > > > > > > > > > > > > > > > > to update keys in a
> > transaction
> > > > > from
> > > > > > > > > multiple
> > > > > > > > > > > > > threads
> > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > parallel.
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00
> > > > ALEKSEY
> > > > > > > > > KUZNETSOV
> > > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Well. Consider a transaction started on one node and continued on
> > > > > > > > > > > > > > > > > > > > > > > > > > another one. The following test describes my idea:
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to the current thread:
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it binds the
> > > > > > > > > > > > > > > > > > > > > > > > > > transaction to the current thread.
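> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > For reference, *reopenTx* could look roughly like this (a sketch only;
> > > > > > > > > > > > > > > > > > > > > > > > > > the exact field and type names inside the tx manager may differ):
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > public void reopenTx(IgniteInternalTx tx) {
> > > > > > > > > > > > > > > > > > > > > > > > > >     // Make the calling thread the owner in the thread-id -> tx map.
> > > > > > > > > > > > > > > > > > > > > > > > > >     threadMap.put(Thread.currentThread().getId(), tx);
> > > > > > > > > > > > > > > > > > > > > > > > > > }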
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > What do you think about it?
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org>:
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Please share the
> rational
> > > > > behind
> > > > > > > this
> > > > > > > > > and
> > > > > > > > > > > the
> > > > > > > > > > > > > > > > thoughts,
> > > > > > > > > > > > > > > > > > > > design
> > > > > > > > > > > > > > > > > > > > > > > ideas
> > > > > > > > > > > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at
> 3:19
> > > AM,
> > > > > > > ALEKSEY
> > > > > > > > > > > > > KUZNETSOV <
> > > > > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> >
> > > > > > > > > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Hi all! Im designing
> > > > > > distributed
> > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > which
> > > > > > > > > > > > > > > > > can
> > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > > started
> > > > > > > > > > > > > > > > > > > > > > > > at
> > > > > > > > > > > > > > > > > > > > > > > > > > one
> > > > > > > > > > > > > > > > > > > > > > > > > > > > node, and continued
> at
> > > > other
> > > > > > one.
> > > > > > > > Has
> > > > > > > > > > > > anybody
> > > > > > > > > > > > > > > > > thoughts
> > > > > > > > > > > > > > > > > > on
> > > > > > > > > > > > > > > > > > > > it
> > > > > > > > > > > > > > > > > > > > > ?
> > > > > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
hmm, if we have millions of entities and millions of operations, wouldn't
this approach lead to memory overflow and performance degradation?

Thu, Mar 16, 2017 at 15:42, Sergi Vladykin <se...@gmail.com>:

> 1. Actually you have to check versions on all the values you have read
> during the tx.
>
> For example if we have [k1 => v1, k2 => v2] and do:
>
> put(k1, get(k2) + 5)
>
> We have to remember the version for k2. This logic can be relatively easily
> encapsulated in a framework atop of Ignite. You need to implement one to
> make all this stuff usable.
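>
> A minimal sketch of such a read-tracking wrapper could look like the
> following (the class and method names here are made up for illustration;
> only IgniteCache is real Ignite API):
>
> import java.util.HashMap;
> import java.util.Map;
> import java.util.UUID;
>
> import org.apache.ignite.IgniteCache;
>
> // Value wrapper carrying the `ver` field discussed in this thread.
> class VersionedValue {
>     final Object val;
>     final UUID ver;
>
>     VersionedValue(Object val, UUID ver) { this.val = val; this.ver = ver; }
> }
>
> // Remembers the version of every value read during the distributed tx,
> // so the committer can later verify that none of them has changed.
> class TxReadSet {
>     private final Map<String, UUID> readVers = new HashMap<>();
>
>     Object get(IgniteCache<String, VersionedValue> cache, String key) {
>         VersionedValue v = cache.get(key);
>
>         if (v != null)
>             readVers.put(key, v.ver); // remember the version we read
>
>         return v == null ? null : v.val;
>     }
>
>     // Handed over to the committer together with the updated keys.
>     Map<String, UUID> readVersions() { return readVers; }
> }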
>
> 2. I suggest avoiding any locking here, because you will easily end up with
> deadlocks. If you do not have too frequent updates for your keys, the
> optimistic approach will work just fine.
>
> Theoretically in the Committer Service you can start a thread for the
> lifetime of the whole distributed transaction, take a lock on the key using
> IgniteCache.lock(K key) before executing any Services, wait for all the
> services to complete, execute optimistic commit in the same thread while
> keeping this lock and then release it. Notice that all the Ignite
> transactions inside of all Services must be optimistic here to be able to
> read this locked key.
>
> But again, I do not recommend using this approach until you have a
> reliable deadlock avoidance scheme.
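>
> For what it is worth, that lock-for-the-whole-tx idea could be sketched as
> follows (runServices and optimisticCommit are placeholders for the
> orchestrator logic; IgniteCache.lock(K), which returns a
> java.util.concurrent.locks.Lock, is real API):
>
> // import java.util.concurrent.locks.Lock;
>
> // Hold a cache lock for the lifetime of the distributed transaction.
> // Lock and unlock must happen in the same thread.
> Lock lock = cache.lock("k1");
>
> lock.lock();
>
> try {
>     runServices(otx);       // invoke Service A, Service B, ...
>     optimisticCommit(otx);  // version check + swap, as described above
> }
> finally {
>     lock.unlock();
> }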
>
> Sergi
>
>
>
>
>
>
>
> 2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Yeah, now I got it.
> > There are some doubts on this approach:
> > 1) During the optimistic commit phase, when you assure no one altered the
> > original values, you must check versions of other dependent keys. How could
> > we obtain those keys (in an automated manner, of course)?
> > 2) How could we lock a key before some Service A introduces changes, so that
> > no other service is allowed to change this key-value? (a sort of pessimistic
> > blocking)
> > Maybe you know some implementations of such an approach?
> >
> > Wed, Mar 15, 2017 at 17:54, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> >
> > >  Thank you very much for help.  I will answer later.
> > >
> > > Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > >
> > > All the services do not update keys in place; they only generate new keys
> > > augmented by otx and store the updated value in the same cache, plus
> > > remember the keys and versions participating in the transaction in some
> > > separate atomic cache.
> > >
> > > Follow this sequence of changes applied to cache contents by each
> > Service:
> > >
> > > Initial cache contents:
> > >             [k1 => v1]
> > >             [k2 => v2]
> > >             [k3 => v3]
> > >
> > > Cache contents after Service A:
> > >             [k1 => v1]
> > >             [k2 => v2]
> > >             [k3 => v3]
> > >             [k1x => v1a]
> > >             [k2x => v2a]
> > >
> > >          + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic
> cache
> > >
> > > Cache contents after Service B:
> > >             [k1 => v1]
> > >             [k2 => v2]
> > >             [k3 => v3]
> > >             [k1x => v1a]
> > >             [k2x => v2ab]
> > >             [k3x => v3b]
> > >
> > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> > > atomic cache
> > >
> > > Finally the Committer Service takes this map of updated keys and their
> > > versions from the separate atomic cache, starts an Ignite transaction and
> > > replaces all the values for the k* keys with the values taken from the k*x
> > > keys. The successful result must be the following:
> > >
> > >             [k1 => v1a]
> > >             [k2 => v2ab]
> > >             [k3 => v3b]
> > >             [k1x => v1a]
> > >             [k2x => v2ab]
> > >             [k3x => v3b]
> > >
> > >         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> > > atomic cache
> > >
> > > But the Committer Service also has to check that no one updated the
> > > original values before us, because otherwise we can not give any
> > > serializability guarantee for these distributed transactions. Here we may
> > > need to check not only the versions of the updated keys, but also the
> > > versions of any other keys the end result depends on.
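> > >
> > > A sketch of that check-and-swap step might look as follows (the ignite and
> > > cache handles, the versions cache and a VersionedValue wrapper like the one
> > > sketched earlier are illustrative, not a fixed API):
> > >
> > > // Assumes org.apache.ignite.transactions.* imports.
> > > try (Transaction tx = ignite.transactions().txStart(
> > >     TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
> > >     Map<String, UUID> expected = versionsCache.get(otx); // [k -> ver]
> > >
> > >     for (Map.Entry<String, UUID> e : expected.entrySet()) {
> > >         String k = e.getKey();
> > >         VersionedValue old = cache.get(k);
> > >
> > >         // Somebody updated k behind our back: abort, restart with a new otx.
> > >         if (old == null || !old.ver.equals(e.getValue()))
> > >             throw new TransactionOptimisticException("Version mismatch: " + k);
> > >
> > >         cache.put(k, cache.get(k + "x")); // promote the temporary value
> > >     }
> > >
> > >     tx.commit();
> > > }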
> > >
> > > After that Committer Service has to do a cleanup (may be outside of the
> > > committing tx) to come to the following final state:
> > >
> > >             [k1 => v1a]
> > >             [k2 => v2ab]
> > >             [k3 => v3b]
> > >
> > > Makes sense?
> > >
> > > Sergi
> > >
> > >
> > > 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > >    - What do you mean by saying "*in a single transaction checks value
> > > >    versions for all the old values and replaces them with calculated new
> > > >    ones*"? Every time you change a value (in some service), you store it
> > > >    to *some special atomic cache*, so when all the services have ceased
> > > >    working, the Committer Service gets the values with the last versions.
> > > >    - After "*does cleanup of temporary keys and values*" the Committer
> > > >    Service persists them into the permanent store, doesn't it?
> > > >    - I can't grasp your thought; you say "*in case of version mismatch or
> > > >    TX timeout just rollbacks*". But what versions would it match?
> > > >
> > > >
> > > > Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >
> > > > > Ok, here is what you actually need to implement at the application
> > > level.
> > > > >
> > > > > Lets say we have to call 2 services in the following order:
> > > > >  - Service A: wants to update keys [k1 => v1,   k2 => v2]  to  [k1
> =>
> > > > v1a,
> > > > >   k2 => v2a]
> > > > >  - Service B: wants to update keys [k2 => v2a, k3 => v3]  to  [k2
> =>
> > > > v2ab,
> > > > > k3 => v3b]
> > > > >
> > > > > The change
> > > > >     from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > > > >     to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > > > must happen in a single transaction.
> > > > >
> > > > >
> > > > > Optimistic protocol to solve this:
> > > > >
> > > > > Each cache key must have a field `otx`, which is a unique
> > orchestrator
> > > TX
> > > > > identifier - it must be a parameter passed to all the services. If
> > > `otx`
> > > > is
> > > > > set to some value it means that it is an intermediate key and is
> > > visible
> > > > > only inside of some transaction, for the finalized key `otx` must
> be
> > > > null -
> > > > > it means the key is committed and visible for everyone.
> > > > >
> > > > > Each cache value must have a field `ver` which is a version of that
> > > > value.
> > > > >
> > > > > For both fields (`otx` and `ver`) the safest way is to use UUID.
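> > > > >
> > > > > Illustratively, the keys and values could be shaped like this (names are
> > > > > made up; a real key class would also need proper equals/hashCode to be
> > > > > usable as an Ignite cache key):
> > > > >
> > > > > class TxKey {
> > > > >     String key; // business key, e.g. "k1"
> > > > >     UUID otx;   // null => committed and visible; non-null => intermediate
> > > > > }
> > > > >
> > > > > class TxValue {
> > > > >     Object val;
> > > > >     UUID ver;   // regenerated on every update
> > > > > }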
> > > > >
> > > > > Workflow is the following:
> > > > >
> > > > > Orchestrator starts the distributed transaction with `otx` = x and
> > > passes
> > > > > this parameter to all the services.
> > > > >
> > > > > Service A:
> > > > >  - does some computations
> > > > >  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > > > >       where
> > > > >           Za - left time from max Orchestrator TX duration after
> > > Service
> > > > A
> > > > > end
> > > > >           k1x, k2x - new temporary keys with field `otx` = x
> > > > >           v2a has updated version `ver`
> > > > >  - returns a set of updated keys and all the old versions to the
> > > > > orchestrator
> > > > >        or just stores it in some special atomic cache like
> > > > >        [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
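> > > > >
> > > > > Storing such temporary entries with a TTL maps directly onto Ignite's
> > > > > expiry policies, roughly like this (the duration and variable names are
> > > > > illustrative; withExpiryPolicy and CreatedExpiryPolicy are real API):
> > > > >
> > > > > // Entries written through this view expire after `za` milliseconds,
> > > > > // so a crashed orchestrator leaves no garbage behind.
> > > > > IgniteCache<TxKey, TxValue> ttlCache = cache.withExpiryPolicy(
> > > > >     new CreatedExpiryPolicy(new Duration(TimeUnit.MILLISECONDS, za)));
> > > > >
> > > > > ttlCache.put(k1x, v1a);
> > > > > ttlCache.put(k2x, v2a);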
> > > > >
> > > > > Service B:
> > > > >  - retrieves the updated value k2x => v2a because it knows `otx` =
> x
> > > > >  - does computations
> > > > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > > > >  - updates the set of updated keys like [x => (k1 -> ver1, k2 ->
> > ver2,
> > > k3
> > > > > -> ver3)] TTL = Zb
> > > > >
> > > > > Service Committer (may be embedded into Orchestrator):
> > > > >  - takes all the updated keys and versions for `otx` = x
> > > > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > > > >  - in a single transaction checks value versions for all the old
> > values
> > > > >        and replaces them with calculated new ones
> > > > >  - does cleanup of temporary keys and values
> > > > >  - in case of version mismatch or TX timeout just rollbacks and
> > signals
> > > > >         to Orchestrator to restart the job with new `otx`
> > > > >
> > > > > PROFIT!!
> > > > >
> > > > > This approach even allows you to run independent parts of the graph
> > in
> > > > > parallel (with TX transfer you will always run only one at a time).
> > > Also
> > > > it
> > > > > does not require inventing any special fault tolerance technics
> > because
> > > > > Ignite caches are already fault tolerant and all the intermediate
> > > results
> > > > > are virtually invisible and stored with TTL, thus in case of any
> > crash
> > > > you
> > > > > will not have inconsistent state or garbage.
> > > > >
> > > > > Sergi
> > > > >
> > > > >
> > > > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > > >
> > > > > > Okay, we are open for proposals on business task. I mean, we can
> > make
> > > > use
> > > > > > of some other thing, not distributed transaction. Not transaction
> > > yet.
> > > > > >
> > > > > > Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vozerov@gridgain.com>:
> > > > > >
> > > > > > > IMO the use case makes sense. However, as Sergi already
> > mentioned,
> > > > the
> > > > > > > problem is far more complex, than simply passing TX state over
> a
> > > > wire.
> > > > > > Most
> > > > > > > probably a kind of coordinator will be required still to manage
> > all
> > > > > kinds
> > > > > > > of failures. This task should be started with clean design
> > proposal
> > > > > > > explaining how we handle all these concurrent events. And only
> > > then,
> > > > > when
> > > > > > > we understand all implications, we should move to development
> > > stage.
> > > > > > >
> > > > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > > > >
> > > > > > > > Right
> > > > > > > >
> > > > > > > > Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > >
> > > > > > > > > Good! Basically your orchestrator just takes some
> predefined
> > > > graph
> > > > > of
> > > > > > > > > distributed services to be invoked, calls them by some kind
> > of
> > > > RPC
> > > > > > and
> > > > > > > > > passes the needed parameters between them, right?
> > > > > > > > >
> > > > > > > > > Sergi
> > > > > > > > >
> > > > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > The orchestrator is a custom thing. It is responsible for
> > > > > > > > > > managing business scenario flows. Many nodes are involved in the
> > > > > > > > > > scenarios. They exchange data and follow one another. If you are
> > > > > > > > > > acquainted with the BPMN framework, the orchestrator is like a
> > > > > > > > > > BPMN engine.
> > > > > > > > > >
> > > > > > > > > > Tue, Mar 14, 2017, 18:56 Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > >
> > > > > > > > > > > What is Orchestrator for you? Is it a thing from
> > Microsoft
> > > or
> > > > > > your
> > > > > > > > > custom
> > > > > > > > > > > in-house software?
> > > > > > > > > > >
> > > > > > > > > > > Sergi
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Fine. Let's say we've got multiple servers which fulfill
> > > > > > > > > > > > custom logic. These servers compose an oriented graph (a BPMN
> > > > > > > > > > > > process) which is controlled by the Orchestrator.
> > > > > > > > > > > > For instance, *server1* creates *variable A* with value 1,
> > > > > > > > > > > > persists it to the IGNITE cache, creates *variable B* and
> > > > > > > > > > > > sends it to *server2*. The latter receives *variable B*, does
> > > > > > > > > > > > some logic with it and stores it to IGNITE.
> > > > > > > > > > > > All the work made by both servers must be fulfilled in *one*
> > > > > > > > > > > > transaction, because we need either all the information
> > > > > > > > > > > > persisted, or nothing (rolled back). The scenario is managed
> > > > > > > > > > > > by the orchestrator.
> > > > > > > > > > > >
> > > > > > > > > > > > Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > >
> > > > > > > > > > > > > Ok, it is not a business case, it is your wrong solution
> > > > > > > > > > > > > for it.
> > > > > > > > > > > > > Let's try again: what is the business case?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > The case is the following: one starts a transaction on
> > > > > > > > > > > > > > one node, and commits this transaction on another jvm
> > > > > > > > > > > > > > node (or rolls it back remotely).
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Because even if you make it work for some
> > > simplistic
> > > > > > > > scenario,
> > > > > > > > > > get
> > > > > > > > > > > > > ready
> > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > write many fault tolerance tests and make sure
> > that
> > > > you
> > > > > > TXs
> > > > > > > > > work
> > > > > > > > > > > > > > gracefully
> > > > > > > > > > > > > > > in all modes in case of crashes. Also make sure
> > > that
> > > > we
> > > > > > do
> > > > > > > > not
> > > > > > > > > > have
> > > > > > > > > > > > any
> > > > > > > > > > > > > > > performance drops after all your changes in
> > > existing
> > > > > > > > > benchmarks.
> > > > > > > > > > > All
> > > > > > > > > > > > in
> > > > > > > > > > > > > > all
> > > > > > > > > > > > > > > I don't believe these conditions will be met
> and
> > > your
> > > > > > > > > > contribution
> > > > > > > > > > > > will
> > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > accepted.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Better solution to what problem? Sending TX to
> > > > another
> > > > > > > node?
> > > > > > > > > The
> > > > > > > > > > > > > problem
> > > > > > > > > > > > > > > statement itself is already wrong. What
> business
> > > case
> > > > > you
> > > > > > > are
> > > > > > > > > > > trying
> > > > > > > > > > > > to
> > > > > > > > > > > > > > > solve? I'm sure everything you need can be done
> > in
> > > a
> > > > > much
> > > > > > > > more
> > > > > > > > > > > simple
> > > > > > > > > > > > > and
> > > > > > > > > > > > > > > efficient way at the application level.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > >:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Why wrong? Do you know a better solution?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Just serializing TX object and
> deserializing
> > it
> > > > on
> > > > > > > > another
> > > > > > > > > > node
> > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > meaningless, because other nodes
> > participating
> > > in
> > > > > the
> > > > > > > TX
> > > > > > > > > have
> > > > > > > > > > > to
> > > > > > > > > > > > > know
> > > > > > > > > > > > > > > > about
> > > > > > > > > > > > > > > > > the new coordinator. This will require
> > protocol
> > > > > > > changes,
> > > > > > > > we
> > > > > > > > > > > > > > definitely
> > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > have fault tolerance and performance
> issues.
> > > IMO
> > > > > the
> > > > > > > > whole
> > > > > > > > > > idea
> > > > > > > > > > > > is
> > > > > > > > > > > > > > > wrong
> > > > > > > > > > > > > > > > > and it makes no sense to waste time on it.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY
> KUZNETSOV
> > <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > The IgniteTransactionState implementation
> > > > > > > > > > > > > > > > > > contains IgniteTxEntry's, which are supposed to
> > > > > > > > > > > > > > > > > > be transferable.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > It sounds a little scary to me that we
> > are
> > > > > > passing
> > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > objects
> > > > > > > > > > > > > > > > > > > around. Such object may contain all
> sorts
> > > of
> > > > > > Ignite
> > > > > > > > > > > context.
> > > > > > > > > > > > If
> > > > > > > > > > > > > > > some
> > > > > > > > > > > > > > > > > data
> > > > > > > > > > > > > > > > > > > needs to be passed across, we should
> > > create a
> > > > > > > special
> > > > > > > > > > > > transfer
> > > > > > > > > > > > > > > object
> > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > this case.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM,
> ALEKSEY
> > > > > > KUZNETSOV
> > > > > > > <
> > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > well, there are a couple of issues preventing
> > > > > > > > > > > > > > > > > > > > the transaction from proceeding.
> > > > > > > > > > > > > > > > > > > > First, after transaction serialization and
> > > > > > > > > > > > > > > > > > > > deserialization on the remote server, there is
> > > > > > > > > > > > > > > > > > > > no txState. So I'm going to put it in
> > > > > > > > > > > > > > > > > > > > writeExternal()\readExternal().
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > The second one is that the deserialized
> > > > > > > > > > > > > > > > > > > > transaction lacks the shared cache context
> > > > > > > > > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps it must
> > > > > > > > > > > > > > > > > > > > be injected by GridResourceProcessor?
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > while starting and continuing
> > > transaction
> > > > > in
> > > > > > > > > > different
> > > > > > > > > > > > jvms
> > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > run
> > > > > > > > > > > > > > > > > > into
> > > > > > > > > > > > > > > > > > > > > serialization exception in
> > > > > writeExternalMeta
> > > > > > :
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > @Override public void
> > > > > > > writeExternal(ObjectOutput
> > > > > > > > > out)
> > > > > > > > > > > > > throws
> > > > > > > > > > > > > > > > > > > IOException
> > > > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > some meta is cannot be serialized.
> > > > > > > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > I think I am starting to get what
> you
> > > > want,
> > > > > > > but I
> > > > > > > > > > have
> > > > > > > > > > > a
> > > > > > > > > > > > > few
> > > > > > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > > > > > >  - What is the API for the proposed
> > > > change?
> > > > > > In
> > > > > > > > your
> > > > > > > > > > > test,
> > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > pass
> > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > instance of transaction created on
> > > > > ignite(0)
> > > > > > to
> > > > > > > > the
> > > > > > > > > > > > ignite
> > > > > > > > > > > > > > > > instance
> > > > > > > > > > > > > > > > > > > > > ignite(1). This is obviously not
> > > possible
> > > > > in
> > > > > > a
> > > > > > > > > truly
> > > > > > > > > > > > > > > distributed
> > > > > > > > > > > > > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > > > > > > > > > > > > - How will you synchronize cache
> > update
> > > > > > actions
> > > > > > > > and
> > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > commit?
> > > > > > > > > > > > > > > > > > > > > Say, you have one node that decided
> > to
> > > > > > commit,
> > > > > > > > but
> > > > > > > > > > > > another
> > > > > > > > > > > > > > node
> > > > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > > > still
> > > > > > > > > > > > > > > > > > > > > writing within this transaction.
> How
> > do
> > > > you
> > > > > > > make
> > > > > > > > > sure
> > > > > > > > > > > > that
> > > > > > > > > > > > > > two
> > > > > > > > > > > > > > > > > nodes
> > > > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > > > > not call commit() and rollback()
> > > > > > > simultaneously?
> > > > > > > > > > > > > > > > > > > > >  - How do you make sure that either
> > > > > commit()
> > > > > > or
> > > > > > > > > > > > rollback()
> > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > called
> > > > > > > > > > > > > > > > > > if
> > > > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > > > originator failed?
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий
> > > Рябов
> > > > <
> > > > > > > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial
> > > > > > > understanding
> > > > > > > > > was
> > > > > > > > > > > > that
> > > > > > > > > > > > > > > > > > transferring
> > > > > > > > > > > > > > > > > > > > of
> > > > > > > > > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > > > > > ownership from one node to
> another
> > > will
> > > > > be
> > > > > > > > > happened
> > > > > > > > > > > > > > > > automatically
> > > > > > > > > > > > > > > > > > > when
> > > > > > > > > > > > > > > > > > > > > > originating node is gone down.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00
> ALEKSEY
> > > > > > KUZNETSOV
> > > > > > > <
> > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Im aiming to span transaction
> on
> > > > > multiple
> > > > > > > > > > threads,
> > > > > > > > > > > > > nodes,
> > > > > > > > > > > > > > > > > > > jvms(soon).
> > > > > > > > > > > > > > > > > > > > > So
> > > > > > > > > > > > > > > > > > > > > > > every node is able to rollback,
> > or
> > > > > commit
> > > > > > > > > common
> > > > > > > > > > > > > > > > transaction.It
> > > > > > > > > > > > > > > > > > > > turned
> > > > > > > > > > > > > > > > > > > > > > up i
> > > > > > > > > > > > > > > > > > > > > > > need to transfer tx between
> nodes
> > > in
> > > > > > order
> > > > > > > to
> > > > > > > > > > > commit
> > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > > > different node(in the same
> jvm).
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20,
> > Alexey
> > > > > > > > Goncharuk <
> > > > > > > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Do you mean that you want a
> > > concept
> > > > > of
> > > > > > > > > > > transferring
> > > > > > > > > > > > > of
> > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > > ownership
> > > > > > > > > > > > > > > > > > > > > > from
> > > > > > > > > > > > > > > > > > > > > > > > one node to another? My
> initial
> > > > > > > > understanding
> > > > > > > > > > was
> > > > > > > > > > > > > that
> > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > want
> > > > > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > > > able
> > > > > > > > > > > > > > > > > > > > > > > > to update keys in a
> transaction
> > > > from
> > > > > > > > multiple
> > > > > > > > > > > > threads
> > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > parallel.
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00
> > > ALEKSEY
> > > > > > > > KUZNETSOV
> > > > > > > > > <
> > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Well. Consider transaction
> > > > started
> > > > > in
> > > > > > > one
> > > > > > > > > > node,
> > > > > > > > > > > > and
> > > > > > > > > > > > > > > > > continued
> > > > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > > > another
> > > > > > > > > > > > > > > > > > > > > > > > > one.
> > > > > > > > > > > > > > > > > > > > > > > > > The following test
> describes
> > my
> > > > > idea:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions
> > > transactions =
> > > > > > > > > > > > > > > ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String,
> Integer>
> > > > cache
> > > > > =
> > > > > > > > > > > > > > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > > > > > > > > > > > > > testCache");
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Transaction tx =
> > > > > > transactions.txStart(
> > > > > > > > > > > > concurrency,
> > > > > > > > > > > > > > > > > > isolation);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > >
> IgniteInternalFuture<Boolean>
> > > > fut =
> > > > > > > > > > > > > > > > > GridTestUtils.runAsync(()
> > > > > > > > > > > > > > > > > > > ->
> > > > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts =
> > > > > > > > > > > > > ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > > > >
> >  Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(
> > > > > > > > > > > > TransactionState.STOPPED,
> > > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > >  Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > >  Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(
> > > > > > > > > > TransactionState.COMMITTED,
> > > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > > >
> Assert.assertEquals((long)1,
> > > > > > > > > > > > > > (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > > >
> Assert.assertEquals((long)3,
> > > > > > > > > > > > > > (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.
> > > > > > > > > > containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)*
> > we
> > > > just
> > > > > > > > rebind
> > > > > > > > > > *tx*
> > > > > > > > > > > > to
> > > > > > > > > > > > > > > > current
> > > > > > > > > > > > > > > > > > > > thread:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > public void
> > txStart(Transaction
> > > > > tx) {
> > > > > > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl
> > > > > > > > transactionProxy =
> > > > > > > > > > > > > > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(
> > > > > > > > > > transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > > > >     transactionProxy.
> > > > > > > > bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > In method *reopenTx* we
> alter
> > > > > > > *threadMap*
> > > > > > > > > so
> > > > > > > > > > > that
> > > > > > > > > > > > > it
> > > > > > > > > > > > > > > > binds
> > > > > > > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > How do u think about it ?
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38,
> > > Denis
> > > > > > > Magda <
> > > > > > > > > > > > > > > > dmagda@apache.org
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > Please share the rational
> > > > behind
> > > > > > this
> > > > > > > > and
> > > > > > > > > > the
> > > > > > > > > > > > > > > thoughts,
> > > > > > > > > > > > > > > > > > > design
> > > > > > > > > > > > > > > > > > > > > > ideas
> > > > > > > > > > > > > > > > > > > > > > > > you
> > > > > > > > > > > > > > > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19
> > AM,
> > > > > > ALEKSEY
> > > > > > > > > > > > KUZNETSOV <
> > > > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > Hi all! Im designing
> > > > > distributed
> > > > > > > > > > > transaction
> > > > > > > > > > > > > > which
> > > > > > > > > > > > > > > > can
> > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > started
> > > > > > > > > > > > > > > > > > > > > > > at
> > > > > > > > > > > > > > > > > > > > > > > > > one
> > > > > > > > > > > > > > > > > > > > > > > > > > > node, and continued at
> > > other
> > > > > one.
> > > > > > > Has
> > > > > > > > > > > anybody
> > > > > > > > > > > > > > > > thoughts
> > > > > > > > > > > > > > > > > on
> > > > > > > > > > > > > > > > > > > it
> > > > > > > > > > > > > > > > > > > > ?
> > > > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > >
> > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > >
> > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
1. Actually you have to check versions on all the values you have read
during the tx.

For example, if we have [k1 => v1, k2 => v2] and do:

put(k1, get(k2) + 5)

We have to remember the version for k2. This logic can be relatively easily
encapsulated in a framework atop Ignite. You need to implement one to
make all this stuff usable.
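
A minimal sketch of such a read-tracking wrapper (VersionedValue and
TxReadTracker are assumed names for that framework level, not Ignite API):

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.IgniteCache;

// Assumed application-level value type: `ver` is maintained by the
// framework itself, Ignite does not expose per-value versions like this.
class VersionedValue {
    final long ver;
    final int val;

    VersionedValue(long ver, int val) {
        this.ver = ver;
        this.val = val;
    }
}

// Remembers the version of every value read during the logical transaction,
// so the committer can validate all of them later in a single Ignite tx.
class TxReadTracker {
    private final IgniteCache<String, VersionedValue> cache;
    private final Map<String, Long> readVersions = new HashMap<>();

    TxReadTracker(IgniteCache<String, VersionedValue> cache) {
        this.cache = cache;
    }

    VersionedValue get(String key) {
        VersionedValue v = cache.get(key);

        if (v != null)
            readVersions.put(key, v.ver); // remember what we have read

        return v;
    }

    Map<String, Long> readVersions() {
        return readVersions;
    }
}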

2. I suggest avoiding any locking here, because you will easily end up with
deadlocks. If updates to your keys are not too frequent, the optimistic
approach will work just fine.

Theoretically, in the Committer Service you can start a thread for the
lifetime of the whole distributed transaction, take a lock on the key using
IgniteCache.lock(K key) before executing any Services, wait for all the
services to complete, execute the optimistic commit in the same thread while
still holding this lock, and then release it. Notice that all the Ignite
transactions inside all Services must be optimistic here to be able to
read this locked key. A sketch of this follows below.
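
Roughly like this, assuming a TRANSACTIONAL cache (runServices() is just a
placeholder for invoking the actual services, not a real API):

import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

class Committer {
    void runJob(Ignite ignite, IgniteCache<String, Integer> cache, String key) {
        // Explicit cache lock, held by this thread for the whole job.
        Lock lock = cache.lock(key);

        lock.lock();
        try {
            runServices(); // all services execute while the key is locked

            // Optimistic commit in the same thread that still holds the lock.
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                Integer v = cache.get(key);

                cache.put(key, v == null ? 1 : v + 1);

                tx.commit();
            }
        }
        finally {
            lock.unlock();
        }
    }

    private void runServices() { /* placeholder for the service calls */ }
}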

But again, I do not recommend using this approach until you have a
reliable deadlock avoidance scheme.

Sergi







2017-03-16 12:53 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Yeah, now I got it.
> There are some doubts about this approach:
> 1) During the optimistic commit phase, when you verify that no one altered
> the original values, you must check the versions of the other keys the
> result depends on. How could we obtain those keys (in an automated manner,
> of course)?
> 2) How could we lock a key before some Service A introduces changes, so that
> no other service is allowed to change this key-value pair (a sort of
> pessimistic blocking)?
> Maybe you know some implementations of such an approach?

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Yeah, now I got it.
There are some doubts about this approach:
1) During the optimistic commit phase, when you verify that no one altered
the original values, you must check the versions of the other keys the
result depends on. How could we obtain those keys (in an automated manner,
of course)?
2) How could we lock a key before some Service A introduces changes, so that
no other service is allowed to change this key-value pair (a sort of
pessimistic blocking)?
Maybe you know some implementations of such an approach?
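
To make question 1 concrete, here is a minimal sketch of the commit step as
I understand it (VersionedValue, the version map and the key mapping are all
assumptions on top of Ignite, not Ignite API):

import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

// Assumed application-level value type with an explicit version field.
class VersionedValue {
    final long ver;
    final int val;

    VersionedValue(long ver, int val) {
        this.ver = ver;
        this.val = val;
    }
}

class CommitterService {
    // Checks that every key still has the version the services read,
    // then replaces the originals with the values prepared under the
    // temporary keys; returns false if the job must be restarted.
    boolean commit(Ignite ignite,
                   IgniteCache<String, VersionedValue> cache,
                   Map<String, Long> readVersions, // [k -> ver] collected by services
                   Map<String, String> tmpKeys) {  // [k -> kx] original -> temporary
        try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
            for (Map.Entry<String, Long> e : readVersions.entrySet()) {
                VersionedValue cur = cache.get(e.getKey());

                if (cur == null || cur.ver != e.getValue())
                    return false; // concurrent update -> tx is rolled back on close
            }

            for (Map.Entry<String, String> e : tmpKeys.entrySet())
                cache.put(e.getKey(), cache.get(e.getValue()));

            tx.commit();
        }

        // Cleanup of the temporary keys may happen outside the committing tx.
        for (String kx : tmpKeys.values())
            cache.remove(kx);

        return true;
    }
}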

Wed, Mar 15, 2017, 17:54, ALEKSEY KUZNETSOV <al...@gmail.com>:

>  Thank you very much for help.  I will answer later.
>
> Wed, Mar 15, 2017, 17:39, Sergi Vladykin <se...@gmail.com>:
>
> All the services do not update keys in place, but only generate new keys
> augmented by otx and store the updated values in the same cache, plus
> remember the keys and versions participating in the transaction in some
> separate atomic cache.
>
> Follow this sequence of changes applied to cache contents by each Service:
>
> Initial cache contents:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>
> Cache contents after Service A:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>             [k1x => v1a]
>             [k2x => v2a]
>
>          + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic cache
>
> Cache contents after Service B:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>             [k1x => v1a]
>             [k2x => v2ab]
>             [k3x => v3b]
>
>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> atomic cache
>
> Finally the Committer Service takes this map of updated keys and their
> versions from some separate atomic cache, starts an Ignite transaction and
> replaces all the values for the k* keys with the values taken from the k*x
> keys. The successful result must be the following:
>
>             [k1 => v1a]
>             [k2 => v2ab]
>             [k3 => v3b]
>             [k1x => v1a]
>             [k2x => v2ab]
>             [k3x => v3b]
>
>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> atomic cache
>
> But the Committer Service also has to check that no one updated the original
> values before us, because otherwise we cannot give any serializability
> guarantee for these distributed transactions. Here we may need to check not
> only the versions of the updated keys, but also the versions of any other
> keys the end result depends on.
>
> After that the Committer Service has to do a cleanup (possibly outside of
> the committing tx) to come to the following final state:
>
>             [k1 => v1a]
>             [k2 => v2ab]
>             [k3 => v3b]
>
> Makes sense?
>
> Sergi
>
>
> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> >    - what do u mean by saying "
> > *in a single transaction checks value versions for all the old values
> >     and replaces them with calculated new ones *"? Every time you change
> >    value(in some service), you store it to *some special atomic cache* ,
> so
> >    when all services ceased working, Service commiter got a values with
> the
> >    last versions.
> >    - After "*does cleanup of temporary keys and values*" Service commiter
> >    persists them into permanent store, isn't it ?
> >    - I cant grasp your though, you say "*in case of version mismatch or
> TX
> >    timeout just rollbacks*". But what versions would it match?
> >
> >
> > ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <se...@gmail.com>:
> >
> > > Ok, here is what you actually need to implement at the application
> level.
> > >
> > > Lets say we have to call 2 services in the following order:
> > >  - Service A: wants to update keys [k1 => v1,   k2 => v2]  to  [k1 =>
> > v1a,
> > >   k2 => v2a]
> > >  - Service B: wants to update keys [k2 => v2a, k3 => v3]  to  [k2 =>
> > v2ab,
> > > k3 => v3b]
> > >
> > > The change
> > >     from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > >     to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > must happen in a single transaction.
> > >
> > >
> > > Optimistic protocol to solve this:
> > >
> > > Each cache key must have a field `otx`, which is a unique orchestrator
> TX
> > > identifier - it must be a parameter passed to all the services. If
> `otx`
> > is
> > > set to some value it means that it is an intermediate key and is
> visible
> > > only inside of some transaction, for the finalized key `otx` must be
> > null -
> > > it means the key is committed and visible for everyone.
> > >
> > > Each cache value must have a field `ver` which is a version of that
> > value.
> > >
> > > For both fields (`otx` and `ver`) the safest way is to use UUID.
> > >
> > > Workflow is the following:
> > >
> > > Orchestrator starts the distributed transaction with `otx` = x and passes
> > > this parameter to all the services.
> > >
> > > Service A:
> > >  - does some computations
> > >  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > >       where
> > >           Za - the time left from the max Orchestrator TX duration after
> > >                Service A ends
> > >           k1x, k2x - new temporary keys with field `otx` = x
> > >           v2a has an updated version `ver`
> > >  - returns the set of updated keys and all the old versions to the
> > >    orchestrator, or just stores it in some special atomic cache like
> > >        [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > >
> > > Service B:
> > >  - retrieves the updated value k2x => v2a because it knows `otx` = x
> > >  - does computations
> > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > >  - updates the set of updated keys like [x => (k1 -> ver1, k2 -> ver2,
> > >    k3 -> ver3)] TTL = Zb
> > >
> > > Service Committer (may be embedded into Orchestrator):
> > >  - takes all the updated keys and versions for `otx` = x:
> > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > >  - in a single transaction checks the value versions for all the old
> > >    values and replaces them with the calculated new ones
> > >  - does cleanup of the temporary keys and values
> > >  - in case of a version mismatch or TX timeout just rolls back and
> > >    signals the Orchestrator to restart the job with a new `otx`
> > >
> > > PROFIT!!
> > >
> > > This approach even allows you to run independent parts of the graph in
> > > parallel (with TX transfer you will always run only one at a time). Also
> > > it does not require inventing any special fault tolerance techniques
> > > because Ignite caches are already fault tolerant and all the intermediate
> > > results are virtually invisible and stored with TTL, thus in case of any
> > > crash you will not have inconsistent state or garbage.
> > >
> > > Sergi
> > >
> > >
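A rough sketch of the Service A step from the workflow quoted above, reusing
the illustrative OtxKey/OtxValue shapes from the previous note. The method
signature and all parameter names are assumptions; withExpiryPolicy() and
put() are actual Ignite API.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.IgniteCache;

class ServiceAStep {
    /** Stores Service A's intermediate results for orchestrator tx otx with TTL = Za. */
    static void store(IgniteCache<OtxKey, OtxValue> data,
        IgniteCache<UUID, Map<String, UUID>> otxCache,
        UUID otx, long zaSeconds, Object v1a, Object v2a, UUID ver1, UUID ver2) {
        Duration za = new Duration(TimeUnit.SECONDS, zaSeconds);

        // Intermediate entries expire on their own if the orchestrator TX dies,
        // so a crash leaves neither inconsistent state nor garbage behind.
        IgniteCache<OtxKey, OtxValue> tmp = data.withExpiryPolicy(new CreatedExpiryPolicy(za));
        tmp.put(new OtxKey("k1", otx), new OtxValue(v1a, UUID.randomUUID())); // k1x => v1a
        tmp.put(new OtxKey("k2", otx), new OtxValue(v2a, UUID.randomUUID())); // k2x => v2a

        // Remember the touched keys and the versions the computation started from:
        // [x => (k1 -> ver1, k2 -> ver2)] with the same TTL.
        Map<String, UUID> vers = new HashMap<>();
        vers.put("k1", ver1);
        vers.put("k2", ver2);
        otxCache.withExpiryPolicy(new CreatedExpiryPolicy(za)).put(otx, vers);
    }
}

Service B would presumably follow the same pattern with TTL = Zb, extending
the version map instead of creating it.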
> > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > >
> > > > Okay, we are open to proposals on the business task. I mean, we can
> > > > make use of some other approach, not a distributed transaction. Maybe
> > > > not a transaction at all.
> > > >
> > > > Wed, Mar 15, 2017 at 11:24, Vladimir Ozerov <vo...@gridgain.com>:
> > > >
> > > > > IMO the use case makes sense. However, as Sergi already mentioned,
> > > > > the problem is far more complex than simply passing TX state over a
> > > > > wire. Most probably a kind of coordinator will still be required to
> > > > > manage all kinds of failures. This task should be started with a
> > > > > clean design proposal explaining how we handle all these concurrent
> > > > > events. And only then, when we understand all the implications,
> > > > > should we move to the development stage.
> > > > >
> > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > >
> > > > > > Right
> > > > > >
> > > > > > Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > >
> > > > > > > Good! Basically your orchestrator just takes some predefined
> > > > > > > graph of distributed services to be invoked, calls them by some
> > > > > > > kind of RPC and passes the needed parameters between them, right?
> > > > > > >
> > > > > > > Sergi
> > > > > > >
> > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > >
> > > > > > > > The orchestrator is a custom thing. It is responsible for
> > > > > > > > managing business scenario flows. Many nodes are involved in
> > > > > > > > the scenarios. They exchange data and follow one another. If
> > > > > > > > you are acquainted with the BPMN framework, the orchestrator
> > > > > > > > is like a BPMN engine.
> > > > > > > >
> > > > > > > > Tue, Mar 14, 2017, 18:56 Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > >
> > > > > > > > > What is Orchestrator for you? Is it a thing from Microsoft
> > > > > > > > > or your custom in-house software?
> > > > > > > > >
> > > > > > > > > Sergi
> > > > > > > > >
> > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > >
> > > > > > > > > > Fine. Let's say we've got multiple servers which fulfill
> > > > > > > > > > custom logic. These servers compose an oriented graph
> > > > > > > > > > (BPMN process) which is controlled by the Orchestrator.
> > > > > > > > > > For instance, *server1* creates *variable A* with value 1,
> > > > > > > > > > persists it to an IGNITE cache, then creates *variable B*
> > > > > > > > > > and sends it to *server2*. The latter receives *variable
> > > > > > > > > > B*, does some logic with it and stores the result to
> > > > > > > > > > IGNITE. All the work made by both servers must be
> > > > > > > > > > fulfilled in *one* transaction, because we need all the
> > > > > > > > > > information persisted, or nothing (rolled back). The
> > > > > > > > > > scenario is managed by the orchestrator.
> > > > > > > > > >
> > > > > > > > > > Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > >
> > > > > > > > > > > Ok, it is not a business case, it is your wrong solution
> > > > > > > > > > > for it. Let's try again: what is the business case?
> > > > > > > > > > >
> > > > > > > > > > > Sergi
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > >
> > > > > > > > > > > > The case is the following: one starts a transaction on
> > > > > > > > > > > > one node, and commits this transaction on another JVM
> > > > > > > > > > > > node (or rolls it back remotely).
> > > > > > > > > > > >
> > > > > > > > > > > > Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > >
> > > > > > > > > > > > > Because even if you make it work for some simplistic
> > > > > > > > > > > > > scenario, get ready to write many fault tolerance
> > > > > > > > > > > > > tests and make sure that your TXs work gracefully in
> > > > > > > > > > > > > all modes in case of crashes. Also make sure that we
> > > > > > > > > > > > > do not have any performance drops after all your
> > > > > > > > > > > > > changes in the existing benchmarks. All in all I
> > > > > > > > > > > > > don't believe these conditions will be met and your
> > > > > > > > > > > > > contribution will be accepted.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Better solution to what problem? Sending the TX to
> > > > > > > > > > > > > another node? The problem statement itself is already
> > > > > > > > > > > > > wrong. What business case are you trying to solve?
> > > > > > > > > > > > > I'm sure everything you need can be done in a much
> > > > > > > > > > > > > simpler and more efficient way at the application
> > > > > > > > > > > > > level.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Why wrong? Do you know a better solution?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Tue, Mar 14, 2017 at 15:46, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Just serializing the TX object and deserializing
> > > > > > > > > > > > > > > it on another node is meaningless, because the
> > > > > > > > > > > > > > > other nodes participating in the TX have to know
> > > > > > > > > > > > > > > about the new coordinator. This will require
> > > > > > > > > > > > > > > protocol changes; we will definitely have fault
> > > > > > > > > > > > > > > tolerance and performance issues. IMO the whole
> > > > > > > > > > > > > > > idea is wrong and it makes no sense to waste time
> > > > > > > > > > > > > > > on it.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The IgniteTransactionState implementation
> > > > > > > > > > > > > > > > contains IgniteTxEntry's, which are supposed
> > > > > > > > > > > > > > > > to be transferable.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org>:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > It sounds a little scary to me that we are
> > > > > > > > > > > > > > > > > passing transaction objects around. Such an
> > > > > > > > > > > > > > > > > object may contain all sorts of Ignite
> > > > > > > > > > > > > > > > > context. If some data needs to be passed
> > > > > > > > > > > > > > > > > across, we should create a special transfer
> > > > > > > > > > > > > > > > > object in this case.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > well, there are a couple of issues
> > > > > > > > > > > > > > > > > > preventing the transaction from proceeding.
> > > > > > > > > > > > > > > > > > At first: after transaction serialization
> > > > > > > > > > > > > > > > > > and deserialization on the remote server,
> > > > > > > > > > > > > > > > > > there is no txState. So I'm going to put it
> > > > > > > > > > > > > > > > > > in writeExternal()/readExternal().
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > The last one is that the deserialized
> > > > > > > > > > > > > > > > > > transaction lacks the shared cache context
> > > > > > > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps it
> > > > > > > > > > > > > > > > > > must be injected by GridResourceProcessor?
> > > > > > > > > > > > > > > > > >
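A hedged sketch of what putting txState into writeExternal()/readExternal()
could mean. The writeExternalMeta()/readExternalMeta() calls come from the
snippet quoted further down in the thread, while the txState field, its
IgniteTxState type, and its serializability are assumptions about Ignite
internals rather than verified API.

@Override public void writeExternal(ObjectOutput out) throws IOException {
    writeExternalMeta(out);
    out.writeObject(txState); // assumption: the tx state object graph is Serializable
}

@Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
    readExternalMeta(in);
    txState = (IgniteTxState)in.readObject();
    // The shared cache context still has to be re-injected on the receiving side.
}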
> > > > > > > > > > > > > > > > > > Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > while starting and continuing a
> > > > > > > > > > > > > > > > > > > transaction in different JVMs I run into
> > > > > > > > > > > > > > > > > > > a serialization exception in
> > > > > > > > > > > > > > > > > > > writeExternalMeta:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > @Override public void
> > > > > > > > > > > > > > > > > > > writeExternal(ObjectOutput out) throws
> > > > > > > > > > > > > > > > > > > IOException {
> > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > some meta cannot be serialized.
> > > > > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > I think I am starting to get what you
> > > > > > > > > > > > > > > > > > > want, but I have a few concerns:
> > > > > > > > > > > > > > > > > > >  - What is the API for the proposed
> > > > > > > > > > > > > > > > > > >    change? In your test, you pass an
> > > > > > > > > > > > > > > > > > >    instance of a transaction created on
> > > > > > > > > > > > > > > > > > >    ignite(0) to the ignite instance
> > > > > > > > > > > > > > > > > > >    ignite(1). This is obviously not
> > > > > > > > > > > > > > > > > > >    possible in a truly distributed
> > > > > > > > > > > > > > > > > > >    (multi-jvm) environment.
> > > > > > > > > > > > > > > > > > >  - How will you synchronize cache update
> > > > > > > > > > > > > > > > > > >    actions and transaction commit? Say,
> > > > > > > > > > > > > > > > > > >    you have one node that decided to
> > > > > > > > > > > > > > > > > > >    commit, but another node is still
> > > > > > > > > > > > > > > > > > >    writing within this transaction. How
> > > > > > > > > > > > > > > > > > >    do you make sure that two nodes will
> > > > > > > > > > > > > > > > > > >    not call commit() and rollback()
> > > > > > > > > > > > > > > > > > >    simultaneously?
> > > > > > > > > > > > > > > > > > >  - How do you make sure that either
> > > > > > > > > > > > > > > > > > >    commit() or rollback() is called if
> > > > > > > > > > > > > > > > > > >    an originator failed?
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <somefireone@gmail.com>:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial
> > > > > > > > > > > > > > > > > > > > understanding was that transferring tx
> > > > > > > > > > > > > > > > > > > > ownership from one node to another
> > > > > > > > > > > > > > > > > > > > would happen automatically when the
> > > > > > > > > > > > > > > > > > > > originating node goes down.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > I'm aiming to span a transaction
> > > > > > > > > > > > > > > > > > > > > across multiple threads, nodes, JVMs
> > > > > > > > > > > > > > > > > > > > > (soon). So every node is able to
> > > > > > > > > > > > > > > > > > > > > rollback or commit the common
> > > > > > > > > > > > > > > > > > > > > transaction. It turned out I need to
> > > > > > > > > > > > > > > > > > > > > transfer the tx between nodes in
> > > > > > > > > > > > > > > > > > > > > order to commit the transaction on a
> > > > > > > > > > > > > > > > > > > > > different node (in the same JVM).
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Do you mean that you want a
> > > > > > > > > > > > > > > > > > > > > > concept of transferring tx
> > > > > > > > > > > > > > > > > > > > > > ownership from one node to another?
> > > > > > > > > > > > > > > > > > > > > > My initial understanding was that
> > > > > > > > > > > > > > > > > > > > > > you want to be able to update keys
> > > > > > > > > > > > > > > > > > > > > > in a transaction from multiple
> > > > > > > > > > > > > > > > > > > > > > threads in parallel.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Well. Consider a transaction
> > > > > > > > > > > > > > > > > > > > > > > started on one node and continued
> > > > > > > > > > > > > > > > > > > > > > > on another one. The following
> > > > > > > > > > > > > > > > > > > > > > > test describes my idea:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to the current thread:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it binds the
> > > > > > > > > > > > > > > > > > > > > > > transaction to the current thread.
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > What do you think of it?
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > >
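As a guess at the reopenTx mentioned above (a sketch of the poster's proposal,
not existing Ignite API; it assumes the transaction manager's threadMap maps a
thread id to its active transaction, as the description above implies):

public void reopenTx(IgniteInternalTx tx) {
    // Rebind the transaction to the calling thread so that subsequent
    // cache operations on this thread participate in it.
    threadMap.put(Thread.currentThread().getId(), tx);
}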
> > > > > > > > > > > > > > > > > > > > > > > Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org>:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > Please share the rationale
> > > > > > > > > > > > > > > > > > > > > > > > behind this and the thoughts
> > > > > > > > > > > > > > > > > > > > > > > > and design ideas you have in
> > > > > > > > > > > > > > > > > > > > > > > > mind.
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM,
> > > > ALEKSEY
> > > > > > > > > > KUZNETSOV <
> > > > > > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > Hi all! Im designing
> > > distributed
> > > > > > > > > transaction
> > > > > > > > > > > > which
> > > > > > > > > > > > > > can
> > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > started
> > > > > > > > > > > > > > > > > > > > > at
> > > > > > > > > > > > > > > > > > > > > > > one
> > > > > > > > > > > > > > > > > > > > > > > > > node, and continued at
> other
> > > one.
> > > > > Has
> > > > > > > > > anybody
> > > > > > > > > > > > > > thoughts
> > > > > > > > > > > > > > > on
> > > > > > > > > > > > > > > > > it
> > > > > > > > > > > > > > > > > > ?
> > > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > >
> > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > >
> > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Thank you very much for the help. I will answer later.

Wed, Mar 15, 2017 at 17:39, Sergi Vladykin <se...@gmail.com>:

> All the services do not update keys in place, but only generate new keys
> augmented by otx, store the updated values in the same cache, and remember
> the keys and versions participating in the transaction in some separate
> atomic cache.
>
> Follow this sequence of changes applied to cache contents by each Service:
>
> Initial cache contents:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>
> Cache contents after Service A:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>             [k1x => v1a]
>             [k2x => v2a]
>
>          + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic cache
>
> Cache contents after Service B:
>             [k1 => v1]
>             [k2 => v2]
>             [k3 => v3]
>             [k1x => v1a]
>             [k2x => v2ab]
>             [k3x => v3b]
>
>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> atomic cache
>
> Finally the Committer Service takes this map of updated keys and their
> versions from the separate atomic cache, starts an Ignite transaction and
> replaces all the values for the k* keys with the values taken from the k*x
> keys. The successful result must be the following:
>
>             [k1 => v1a]
>             [k2 => v2ab]
>             [k3 => v3b]
>             [k1x => v1a]
>             [k2x => v2ab]
>             [k3x => v3b]
>
>         + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
> atomic cache
>
> But the Committer Service also has to check that no one updated the
> original values before us, because otherwise we cannot give any
> serializability guarantee for these distributed transactions. Here we may
> need to check not only the versions of the updated keys, but also the
> versions of any other keys the end result depends on.
>
> After that the Committer Service has to do a cleanup (it may happen outside
> of the committing tx) to come to the following final state:
>
>             [k1 => v1a]
>             [k2 => v2ab]
>             [k3 => v3b]
>
> Makes sense?
>
> Sergi
>
>
> 2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> >    - what do u mean by saying "
> > *in a single transaction checks value versions for all the old values
> >     and replaces them with calculated new ones *"? Every time you change
> >    value(in some service), you store it to *some special atomic cache* ,
> so
> >    when all services ceased working, Service commiter got a values with
> the
> >    last versions.
> >    - After "*does cleanup of temporary keys and values*" Service commiter
> >    persists them into permanent store, isn't it ?
> >    - I cant grasp your though, you say "*in case of version mismatch or
> TX
> >    timeout just rollbacks*". But what versions would it match?
> >
> >
> > ср, 15 мар. 2017 г. в 15:34, Sergi Vladykin <se...@gmail.com>:
> >
> > > Ok, here is what you actually need to implement at the application
> level.
> > >
> > > Lets say we have to call 2 services in the following order:
> > >  - Service A: wants to update keys [k1 => v1,   k2 => v2]  to  [k1 =>
> > v1a,
> > >   k2 => v2a]
> > >  - Service B: wants to update keys [k2 => v2a, k3 => v3]  to  [k2 =>
> > v2ab,
> > > k3 => v3b]
> > >
> > > The change
> > >     from [ k1 => v1,   k2 => v2,     k3 => v3   ]
> > >     to     [ k1 => v1a, k2 => v2ab, k3 => v3b ]
> > > must happen in a single transaction.
> > >
> > >
> > > Optimistic protocol to solve this:
> > >
> > > Each cache key must have a field `otx`, which is a unique orchestrator
> TX
> > > identifier - it must be a parameter passed to all the services. If
> `otx`
> > is
> > > set to some value it means that it is an intermediate key and is
> visible
> > > only inside of some transaction, for the finalized key `otx` must be
> > null -
> > > it means the key is committed and visible for everyone.
> > >
> > > Each cache value must have a field `ver` which is a version of that
> > value.
> > >
> > > For both fields (`otx` and `ver`) the safest way is to use UUID.
> > >
> > > Workflow is the following:
> > >
> > > Orchestrator starts the distributed transaction with `otx` = x and
> passes
> > > this parameter to all the services.
> > >
> > > Service A:
> > >  - does some computations
> > >  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
> > >       where
> > >           Za - left time from max Orchestrator TX duration after
> Service
> > A
> > > end
> > >           k1x, k2x - new temporary keys with field `otx` = x
> > >           v2a has updated version `ver`
> > >  - returns a set of updated keys and all the old versions to the
> > > orchestrator
> > >        or just stores it in some special atomic cache like
> > >        [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> > >
> > > Service B:
> > >  - retrieves the updated value k2x => v2a because it knows `otx` = x
> > >  - does computations
> > >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> > >  - updates the set of updated keys like [x => (k1 -> ver1, k2 -> ver2,
> k3
> > > -> ver3)] TTL = Zb
> > >
> > > Service Committer (may be embedded into Orchestrator):
> > >  - takes all the updated keys and versions for `otx` = x
> > >        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> > >  - in a single transaction checks value versions for all the old values
> > >        and replaces them with calculated new ones
> > >  - does cleanup of temporary keys and values
> > >  - in case of version mismatch or TX timeout just rollbacks and signals
> > >         to Orchestrator to restart the job with new `otx`
> > >
> > > PROFIT!!
> > >
> > > This approach even allows you to run independent parts of the graph in
> > > parallel (with TX transfer you will always run only one at a time).
> Also
> > it
> > > does not require inventing any special fault tolerance technics because
> > > Ignite caches are already fault tolerant and all the intermediate
> results
> > > are virtually invisible and stored with TTL, thus in case of any crash
> > you
> > > will not have inconsistent state or garbage.
> > >
> > > Sergi
> > >
> > >
> > > 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > Okay, we are open for proposals on business task. I mean, we can make
> > use
> > > > of some other thing, not distributed transaction. Not transaction
> yet.
> > > >
> > > > ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <vo...@gridgain.com>:
> > > >
> > > > > IMO the use case makes sense. However, as Sergi already mentioned,
> > the
> > > > > problem is far more complex, than simply passing TX state over a
> > wire.
> > > > Most
> > > > > probably a kind of coordinator will be required still to manage all
> > > kinds
> > > > > of failures. This task should be started with clean design proposal
> > > > > explaining how we handle all these concurrent events. And only
> then,
> > > when
> > > > > we understand all implications, we should move to development
> stage.
> > > > >
> > > > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com> wrote:
> > > > >
> > > > > > Right
> > > > > >
> > > > > > ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
> > > sergi.vladykin@gmail.com
> > > > >:
> > > > > >
> > > > > > > Good! Basically your orchestrator just takes some predefined
> > graph
> > > of
> > > > > > > distributed services to be invoked, calls them by some kind of
> > RPC
> > > > and
> > > > > > > passes the needed parameters between them, right?
> > > > > > >
> > > > > > > Sergi
> > > > > > >
> > > > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > > orchestrator is a custom thing. He is responsible for
> managing
> > > > > business
> > > > > > > > scenarios flows. Many nodes are involved in scenarios. They
> > > > exchange
> > > > > > data
> > > > > > > > and folow one another. If you acquinted with BPMN framework,
> so
> > > > > > > > orchestrator is like bpmn engine.
> > > > > > > >
> > > > > > > > вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
> > > sergi.vladykin@gmail.com
> > > > >:
> > > > > > > >
> > > > > > > > > What is Orchestrator for you? Is it a thing from Microsoft
> or
> > > > your
> > > > > > > custom
> > > > > > > > > in-house software?
> > > > > > > > >
> > > > > > > > > Sergi
> > > > > > > > >
> > > > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Fine. Let's say we've got multiple servers which fulfills
> > > > custom
> > > > > > > logic.
> > > > > > > > > > This servers compound oriented graph (BPMN process) which
> > > > > > controlled
> > > > > > > by
> > > > > > > > > > Orchestrator.
> > > > > > > > > > For instance, *server1  *creates *variable A *with value
> 1,
> > > > > > persists
> > > > > > > it
> > > > > > > > > to
> > > > > > > > > > IGNITE cache and creates *variable B *and sends it to*
> > > server2.
> > > > > > *The
> > > > > > > > > > latests receives *variable B*, do some logic with it and
> > > stores
> > > > > to
> > > > > > > > > IGNITE.
> > > > > > > > > > All the work made by both servers must be fulfilled in
> > *one*
> > > > > > > > transaction.
> > > > > > > > > > Because we need all information done, or
> > nothing(rollbacked).
> > > > The
> > > > > > > > > scenario
> > > > > > > > > > is managed by orchestrator.
> > > > > > > > > >
> > > > > > > > > > вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > > > > > > sergi.vladykin@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Ok, it is not a business case, it is your wrong
> solution
> > > for
> > > > > it.
> > > > > > > > > > > Lets try again, what is the business case?
> > > > > > > > > > >
> > > > > > > > > > > Sergi
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > The case is the following, One starts transaction in
> > one
> > > > > node,
> > > > > > > and
> > > > > > > > > > commit
> > > > > > > > > > > > this transaction in another jvm node(or rollback it
> > > > > remotely).
> > > > > > > > > > > >
> > > > > > > > > > > > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <
> > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Because even if you make it work for some
> simplistic
> > > > > > scenario,
> > > > > > > > get
> > > > > > > > > > > ready
> > > > > > > > > > > > to
> > > > > > > > > > > > > write many fault tolerance tests and make sure that
> > you
> > > > TXs
> > > > > > > work
> > > > > > > > > > > > gracefully
> > > > > > > > > > > > > in all modes in case of crashes. Also make sure
> that
> > we
> > > > do
> > > > > > not
> > > > > > > > have
> > > > > > > > > > any
> > > > > > > > > > > > > performance drops after all your changes in
> existing
> > > > > > > benchmarks.
> > > > > > > > > All
> > > > > > > > > > in
> > > > > > > > > > > > all
> > > > > > > > > > > > > I don't believe these conditions will be met and
> your
> > > > > > > > contribution
> > > > > > > > > > will
> > > > > > > > > > > > be
> > > > > > > > > > > > > accepted.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Better solution to what problem? Sending TX to
> > another
> > > > > node?
> > > > > > > The
> > > > > > > > > > > problem
> > > > > > > > > > > > > statement itself is already wrong. What business
> case
> > > you
> > > > > are
> > > > > > > > > trying
> > > > > > > > > > to
> > > > > > > > > > > > > solve? I'm sure everything you need can be done in
> a
> > > much
> > > > > > more
> > > > > > > > > simple
> > > > > > > > > > > and
> > > > > > > > > > > > > efficient way at the application level.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Why wrong ? You know the better solution?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> > > > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > > > >:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Just serializing TX object and deserializing it
> > on
> > > > > > another
> > > > > > > > node
> > > > > > > > > > is
> > > > > > > > > > > > > > > meaningless, because other nodes participating
> in
> > > the
> > > > > TX
> > > > > > > have
> > > > > > > > > to
> > > > > > > > > > > know
> > > > > > > > > > > > > > about
> > > > > > > > > > > > > > > the new coordinator. This will require protocol
> > > > > changes,
> > > > > > we
> > > > > > > > > > > > definitely
> > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > have fault tolerance and performance issues.
> IMO
> > > the
> > > > > > whole
> > > > > > > > idea
> > > > > > > > > > is
> > > > > > > > > > > > > wrong
> > > > > > > > > > > > > > > and it makes no sense to waste time on it.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sergi
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > >:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > IgniteTransactionState implememntation
> contains
> > > > > > > > > IgniteTxEntry's
> > > > > > > > > > > > which
> > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > supposed to be transferable
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy
> Setrakyan
> > <
> > > > > > > > > > > > > dsetrakyan@apache.org
> > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > It sounds a little scary to me that we are
> > > > passing
> > > > > > > > > > transaction
> > > > > > > > > > > > > > objects
> > > > > > > > > > > > > > > > > around. Such object may contain all sorts
> of
> > > > Ignite
> > > > > > > > > context.
> > > > > > > > > > If
> > > > > > > > > > > > > some
> > > > > > > > > > > > > > > data
> > > > > > > > > > > > > > > > > needs to be passed across, we should
> create a
> > > > > special
> > > > > > > > > > transfer
> > > > > > > > > > > > > object
> > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > this case.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY
> > > > KUZNETSOV
> > > > > <
> > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > well, there a couple of issues preventing
> > > > > > transaction
> > > > > > > > > > > > proceeding.
> > > > > > > > > > > > > > > > > > At first, After transaction serialization
> > and
> > > > > > > > > > deserialization
> > > > > > > > > > > > on
> > > > > > > > > > > > > > the
> > > > > > > > > > > > > > > > > remote
> > > > > > > > > > > > > > > > > > server, there is no txState. So im going
> to
> > > put
> > > > > it
> > > > > > in
> > > > > > > > > > > > > > > > > > writeExternal()\readExternal()
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > The last one is Deserialized transaction
> > > lacks
> > > > of
> > > > > > > > shared
> > > > > > > > > > > cache
> > > > > > > > > > > > > > > context
> > > > > > > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps,
> it
> > > must
> > > > > be
> > > > > > > > > injected
> > > > > > > > > > > by
> > > > > > > > > > > > > > > > > > GridResourceProcessor ?
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY
> > > KUZNETSOV
> > > > <
> > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > while starting and continuing
> transaction
> > > in
> > > > > > > > different
> > > > > > > > > > jvms
> > > > > > > > > > > > in
> > > > > > > > > > > > > > run
> > > > > > > > > > > > > > > > into
> > > > > > > > > > > > > > > > > > > serialization exception in
> > > writeExternalMeta
> > > > :
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > @Override public void
> > > > > writeExternal(ObjectOutput
> > > > > > > out)
> > > > > > > > > > > throws
> > > > > > > > > > > > > > > > > IOException
> > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > some meta is cannot be serialized.
> > > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey
> > > > Goncharuk <
> > > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > I think I am starting to get what you
> > want,
> > > > > but I
> > > > > > > > have
> > > > > > > > > a
> > > > > > > > > > > few
> > > > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > > > >  - What is the API for the proposed
> > change?
> > > > In
> > > > > > your
> > > > > > > > > test,
> > > > > > > > > > > you
> > > > > > > > > > > > > > pass
> > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > instance of transaction created on
> > > ignite(0)
> > > > to
> > > > > > the
> > > > > > > > > > ignite
> > > > > > > > > > > > > > instance
> > > > > > > > > > > > > > > > > > > ignite(1). This is obviously not
> possible
> > > in
> > > > a
> > > > > > > truly
> > > > > > > > > > > > > distributed
> > > > > > > > > > > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > > > > > > > > > > - How will you synchronize cache update
> > > > actions
> > > > > > and
> > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > commit?
> > > > > > > > > > > > > > > > > > > Say, you have one node that decided to
> > > > commit,
> > > > > > but
> > > > > > > > > > another
> > > > > > > > > > > > node
> > > > > > > > > > > > > > is
> > > > > > > > > > > > > > > > > still
> > > > > > > > > > > > > > > > > > > writing within this transaction. How do
> > you
> > > > > make
> > > > > > > sure
> > > > > > > > > > that
> > > > > > > > > > > > two
> > > > > > > > > > > > > > > nodes
> > > > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > > > not call commit() and rollback()
> > > > > simultaneously?
> > > > > > > > > > > > > > > > > > >  - How do you make sure that either
> > > commit()
> > > > or
> > > > > > > > > > rollback()
> > > > > > > > > > > is
> > > > > > > > > > > > > > > called
> > > > > > > > > > > > > > > > if
> > > > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > > > originator failed?
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий
> Рябов
> > <
> > > > > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial
> > > > > understanding
> > > > > > > was
> > > > > > > > > > that
> > > > > > > > > > > > > > > > transferring
> > > > > > > > > > > > > > > > > > of
> > > > > > > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > > > ownership from one node to another
> will
> > > be
> > > > > > > happened
> > > > > > > > > > > > > > automatically
> > > > > > > > > > > > > > > > > when
> > > > > > > > > > > > > > > > > > > > originating node is gone down.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY
> > > > KUZNETSOV
> > > > > <
> > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Im aiming to span transaction on
> > > multiple
> > > > > > > > threads,
> > > > > > > > > > > nodes,
> > > > > > > > > > > > > > > > > jvms(soon).
> > > > > > > > > > > > > > > > > > > So
> > > > > > > > > > > > > > > > > > > > > every node is able to rollback, or
> > > commit
> > > > > > > common
> > > > > > > > > > > > > > transaction.It
> > > > > > > > > > > > > > > > > > turned
> > > > > > > > > > > > > > > > > > > > up i
> > > > > > > > > > > > > > > > > > > > > need to transfer tx between nodes
> in
> > > > order
> > > > > to
> > > > > > > > > commit
> > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > different node(in the same jvm).
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey
> > > > > > Goncharuk <
> > > > > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Do you mean that you want a
> concept
> > > of
> > > > > > > > > transferring
> > > > > > > > > > > of
> > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > ownership
> > > > > > > > > > > > > > > > > > > > from
> > > > > > > > > > > > > > > > > > > > > > one node to another? My initial
> > > > > > understanding
> > > > > > > > was
> > > > > > > > > > > that
> > > > > > > > > > > > > you
> > > > > > > > > > > > > > > want
> > > > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > > > able
> > > > > > > > > > > > > > > > > > > > > > to update keys in a transaction
> > from
> > > > > > multiple
> > > > > > > > > > threads
> > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > parallel.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00
> ALEKSEY
> > > > > > KUZNETSOV
> > > > > > > <
> > > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Well. Consider transaction
> > started
> > > in
> > > > > one
> > > > > > > > node,
> > > > > > > > > > and
> > > > > > > > > > > > > > > continued
> > > > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > > > another
> > > > > > > > > > > > > > > > > > > > > > > one.
> > > > > > > > > > > > > > > > > > > > > > > The following test describes my
> > > idea:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteTransactions
> transactions =
> > > > > > > > > > > > > ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer>
> > cache
> > > =
> > > > > > > > > > > > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > > > > > > > > > > > testCache");
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Transaction tx =
> > > > transactions.txStart(
> > > > > > > > > > concurrency,
> > > > > > > > > > > > > > > > isolation);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean>
> > fut =
> > > > > > > > > > > > > > > GridTestUtils.runAsync(()
> > > > > > > > > > > > > > > > > ->
> > > > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts =
> > > > > > > > > > > ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(
> > > > > > > > > > TransactionState.STOPPED,
> > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > >  Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > >  Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(
> > > > > > > > TransactionState.COMMITTED,
> > > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1,
> > > > > > > > > > > > (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3,
> > > > > > > > > > > > (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.
> > > > > > > > containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we
> > just
> > > > > > rebind
> > > > > > > > *tx*
> > > > > > > > > > to
> > > > > > > > > > > > > > current
> > > > > > > > > > > > > > > > > > thread:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > public void txStart(Transaction
> > > tx) {
> > > > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl
> > > > > > transactionProxy =
> > > > > > > > > > > > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(
> > > > > > > > transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > > > >     transactionProxy.
> > > > > > bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > In method *reopenTx* we alter
> > > > > *threadMap*
> > > > > > > so
> > > > > > > > > that
> > > > > > > > > > > it
> > > > > > > > > > > > > > binds
> > > > > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > How do u think about it ?
>
> On Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org>:
>
> > Hi Alexey,
> >
> > Please share the rationale behind this and the thoughts, design ideas
> > you have in mind.
> >
> > —
> > Denis
> >
> > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > >
> > > Hi all! I'm designing a distributed transaction which can be started at
> > > one node and continued at another one. Has anybody thoughts on it?
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
None of the services update keys in place; they only generate new keys
augmented by `otx`, store the updated values in the same cache, and remember
the keys and versions participating in the transaction in some separate
atomic cache.
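To make that write pattern concrete, here is a minimal, hypothetical sketch of
what a service's step could look like. The `Value` wrapper, the cache names
"data" and "otxUpdates", and the `compute` step are illustrative assumptions,
not anything prescribed by this thread:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ServiceA {
    /** Hypothetical value wrapper carrying the version field `ver`. */
    public static class Value {
        public final UUID ver;
        public final Object payload;

        public Value(UUID ver, Object payload) {
            this.ver = ver;
            this.payload = payload;
        }
    }

    public static void update(Ignite ignite, UUID otx, long leftMillis) {
        // Temporary entries expire on their own if the orchestrator TX never finishes.
        Duration ttl = new Duration(TimeUnit.MILLISECONDS, leftMillis);

        IgniteCache<String, Value> data =
            ignite.<String, Value>cache("data").withExpiryPolicy(new CreatedExpiryPolicy(ttl));

        Value v1 = data.get("k1");
        Value v2 = data.get("k2");

        // Store the updated values under temporary keys augmented by otx (k1x, k2x).
        data.put("k1" + otx, new Value(UUID.randomUUID(), compute(v1)));
        data.put("k2" + otx, new Value(UUID.randomUUID(), compute(v2)));

        // Remember the original keys and versions for the committer.
        IgniteCache<UUID, Map<String, UUID>> updates =
            ignite.<UUID, Map<String, UUID>>cache("otxUpdates").withExpiryPolicy(new CreatedExpiryPolicy(ttl));

        Map<String, UUID> vers = new HashMap<>();
        vers.put("k1", v1.ver);
        vers.put("k2", v2.ver);

        updates.put(otx, vers);
    }

    /** Placeholder for the service's actual business logic. */
    private static Object compute(Value old) {
        return old == null ? null : old.payload;
    }
}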

Follow this sequence of changes applied to cache contents by each Service:

Initial cache contents:
            [k1 => v1]
            [k2 => v2]
            [k3 => v3]

Cache contents after Service A:
            [k1 => v1]
            [k2 => v2]
            [k3 => v3]
            [k1x => v1a]
            [k2x => v2a]

         + [x => (k1 -> ver1, k2 -> ver2)] in some separate atomic cache

Cache contents after Service B:
            [k1 => v1]
            [k2 => v2]
            [k3 => v3]
            [k1x => v1a]
            [k2x => v2ab]
            [k3x => v3b]

        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
atomic cache

Finally the Committer Service takes this map of updated keys and their
versions from the separate atomic cache, starts an Ignite transaction and
replaces all the values for the k* keys with the values taken from the k*x
keys. The successful result must be the following:

            [k1 => v1a]
            [k2 => v2ab]
            [k3 => v3b]
            [k1x => v1a]
            [k2x => v2ab]
            [k3x => v3b]

        + [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] in some separate
atomic cache

But the Committer Service also has to check that no one updated the original
values before us, because otherwise we cannot give any serializability
guarantee for these distributed transactions. Here we may need to check not
only the versions of the updated keys, but also the versions of any other
keys the end result depends on.
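A minimal sketch of that check-and-replace step, reusing the hypothetical
`Value` wrapper and cache names from the sketch above. Closing an Ignite
`Transaction` without committing rolls it back, which is what happens on a
version mismatch here:

import java.util.Map;
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

public class Committer {
    /** Returns false on version mismatch so the orchestrator can retry with a new otx. */
    public static boolean commit(Ignite ignite, UUID otx) {
        IgniteCache<String, ServiceA.Value> data = ignite.cache("data");
        IgniteCache<UUID, Map<String, UUID>> updates = ignite.cache("otxUpdates");

        Map<String, UUID> vers = updates.get(otx);

        try (Transaction tx = ignite.transactions().txStart()) {
            for (Map.Entry<String, UUID> e : vers.entrySet()) {
                ServiceA.Value old = data.get(e.getKey());

                // Someone replaced an original value since the services read it: give up.
                if (old == null || !old.ver.equals(e.getValue()))
                    return false; // tx.close() rolls the uncommitted tx back

                // Promote the temporary value: k1x -> k1, etc.
                data.put(e.getKey(), data.get(e.getKey() + otx));
            }

            tx.commit();
        }

        // Cleanup of the temporary keys may happen outside of the committing tx.
        for (String k : vers.keySet())
            data.remove(k + otx);

        updates.remove(otx);

        return true;
    }
}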

After that the Committer Service has to do a cleanup (it may be outside of
the committing tx) to come to the following final state:

            [k1 => v1a]
            [k2 => v2ab]
            [k3 => v3b]

Makes sense?

Sergi


2017-03-15 16:54 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

>    - What do you mean by saying "*in a single transaction checks value
>    versions for all the old values and replaces them with calculated new
>    ones*"? Every time you change a value (in some service), you store it to
>    *some special atomic cache*, so when all the services have finished, the
>    Committer Service gets the values with the latest versions.
>    - After "*does cleanup of temporary keys and values*", the Committer
>    Service persists them into the permanent store, doesn't it?
>    - I can't grasp your thought when you say "*in case of version mismatch
>    or TX timeout just rollbacks*". But what versions would it match?
>
>
> On Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <se...@gmail.com>:
>
> > Ok, here is what you actually need to implement at the application level.
> >
> > Let's say we have to call 2 services in the following order:
> >  - Service A: wants to update keys [k1 => v1,  k2 => v2]  to  [k1 => v1a,  k2 => v2a]
> >  - Service B: wants to update keys [k2 => v2a, k3 => v3]  to  [k2 => v2ab, k3 => v3b]
> >
> > The change
> >     from [ k1 => v1,   k2 => v2,   k3 => v3  ]
> >     to   [ k1 => v1a,  k2 => v2ab, k3 => v3b ]
> > must happen in a single transaction.
> >
> > Optimistic protocol to solve this:
> >
> > Each cache key must have a field `otx`, which is a unique orchestrator TX
> > identifier - it must be a parameter passed to all the services. If `otx` is
> > set to some value, it means that it is an intermediate key visible only
> > inside of that transaction; for a finalized key `otx` must be null, which
> > means the key is committed and visible to everyone.
> >
> > Each cache value must have a field `ver` which is a version of that value.
> >
> > For both fields (`otx` and `ver`) the safest way is to use UUID.
> >
> > Workflow is the following:
> >
> > Orchestrator starts the distributed transaction with `otx` = x and passes
> > this parameter to all the services.
> >
> > Service A:
> >  - does some computations
> >  - stores [k1x => v1a, k2x => v2a] with TTL = Za
> >       where
> >           Za - the time left from the max Orchestrator TX duration after
> >                Service A ends
> >           k1x, k2x - new temporary keys with field `otx` = x
> >           v2a has an updated version `ver`
> >  - returns a set of updated keys and all the old versions to the
> >    orchestrator, or just stores it in some special atomic cache like
> >    [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
> >
> > Service B:
> >  - retrieves the updated value k2x => v2a because it knows `otx` = x
> >  - does computations
> >  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
> >  - updates the set of updated keys like
> >    [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
> >
> > Service Committer (may be embedded into the Orchestrator):
> >  - takes all the updated keys and versions for `otx` = x
> >    [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
> >  - in a single transaction checks value versions for all the old values
> >    and replaces them with the calculated new ones
> >  - does cleanup of temporary keys and values
> >  - in case of version mismatch or TX timeout just rolls back and signals
> >    the Orchestrator to restart the job with a new `otx`
> >
> > PROFIT!!
> >
> > This approach even allows you to run independent parts of the graph in
> > parallel (with TX transfer you will always run only one at a time). Also it
> > does not require inventing any special fault tolerance techniques because
> > Ignite caches are already fault tolerant and all the intermediate results
> > are virtually invisible and stored with TTL, thus in case of any crash you
> > will not have inconsistent state or garbage.
> >
> > Sergi
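For reference, the two fields the quoted protocol relies on could be modeled
like this; the class and field names are illustrative only:

import java.io.Serializable;
import java.util.UUID;

/** Illustrative key: otx == null means the key is committed and visible to everyone. */
class OtxKey implements Serializable {
    String key; // logical key, e.g. "k1"
    UUID otx;   // orchestrator TX id while the key is intermediate, null once finalized
}

/** Illustrative value carrying the version the committer compares on commit. */
class VersionedValue implements Serializable {
    UUID ver;     // version checked against the recorded one
    Object value; // actual payload
}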

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
   - What do you mean by saying "*in a single transaction checks value
   versions for all the old values and replaces them with calculated new
   ones*"? Every time you change a value (in some service), you store it to
   *some special atomic cache*, so when all the services have finished, the
   Committer Service gets the values with the latest versions.
   - After "*does cleanup of temporary keys and values*", the Committer
   Service persists them into the permanent store, doesn't it?
   - I can't grasp your thought when you say "*in case of version mismatch
   or TX timeout just rollbacks*". But what versions would it match?
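For illustration only, the version check and restart being asked about could
look like this on the orchestrator side, reusing the hypothetical `Committer`
and `ServiceA` sketches from the message above (the TTL argument is an
assumption):

import java.util.UUID;
import org.apache.ignite.Ignite;

public class Orchestrator {
    /** Re-runs the whole job with a fresh otx until the committer wins the version check. */
    public static void run(Ignite ignite) {
        boolean committed;

        do {
            UUID otx = UUID.randomUUID();

            ServiceA.update(ignite, otx, 30_000); // then Service B, and so on

            committed = Committer.commit(ignite, otx);
        } while (!committed);
    }
}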


On Wed, Mar 15, 2017 at 15:34, Sergi Vladykin <se...@gmail.com>:

> Ok, here is what you actually need to implement at the application level.
>
> Let's say we have to call 2 services in the following order:
>  - Service A: wants to update keys [k1 => v1,  k2 => v2]  to  [k1 => v1a,  k2 => v2a]
>  - Service B: wants to update keys [k2 => v2a, k3 => v3]  to  [k2 => v2ab, k3 => v3b]
>
> The change
>     from [ k1 => v1,   k2 => v2,   k3 => v3  ]
>     to   [ k1 => v1a,  k2 => v2ab, k3 => v3b ]
> must happen in a single transaction.
>
>
> Optimistic protocol to solve this:
>
> Each cache key must have a field `otx`, which is a unique orchestrator TX
> identifier - it must be a parameter passed to all the services. If `otx` is
> set to some value, it means that it is an intermediate key visible only
> inside of that transaction; for a finalized key `otx` must be null, which
> means the key is committed and visible to everyone.
>
> Each cache value must have a field `ver` which is a version of that value.
>
> For both fields (`otx` and `ver`) the safest way is to use UUID.
>
> Workflow is the following:
>
> Orchestrator starts the distributed transaction with `otx` = x and passes
> this parameter to all the services.
>
> Service A:
>  - does some computations
>  - stores [k1x => v1a, k2x => v2a]  with TTL = Za
>       where
>           Za - the time left from the max Orchestrator TX duration after
>                Service A ends
>           k1x, k2x - new temporary keys with field `otx` = x
>           v2a has updated version `ver`
>  - returns a set of updated keys and all the old versions to the
>    orchestrator, or just stores it in some special atomic cache like
>    [x => (k1 -> ver1, k2 -> ver2)] TTL = Za
>
> Service B:
>  - retrieves the updated value k2x => v2a because it knows `otx` = x
>  - does computations
>  - stores [k2x => v2ab, k3x => v3b] TTL = Zb
>  - updates the set of updated keys like
>    [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] TTL = Zb
>
> Service Committer (may be embedded into Orchestrator):
>  - takes all the updated keys and versions for `otx` = x
>        [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
>  - in a single transaction checks value versions for all the old values
>        and replaces them with calculated new ones
>  - does cleanup of temporary keys and values
>  - in case of version mismatch or TX timeout just rolls back and signals
>    the Orchestrator to restart the job with a new `otx`
>
> PROFIT!!
>
> This approach even allows you to run independent parts of the graph in
> parallel (with TX transfer you will always run only one at a time). Also it
> does not require inventing any special fault tolerance techniques because
> Ignite caches are already fault tolerant and all the intermediate results
> are virtually invisible and stored with TTL, thus in case of any crash you
> will not have inconsistent state or garbage.
>
> Sergi
>
>
> 2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Okay, we are open for proposals on business task. I mean, we can make use
> > of some other thing, not distributed transaction. Not transaction yet.
> >
> > ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <vo...@gridgain.com>:
> >
> > > IMO the use case makes sense. However, as Sergi already mentioned, the
> > > problem is far more complex, than simply passing TX state over a wire.
> > Most
> > > probably a kind of coordinator will be required still to manage all
> kinds
> > > of failures. This task should be started with clean design proposal
> > > explaining how we handle all these concurrent events. And only then,
> when
> > > we understand all implications, we should move to development stage.
> > >
> > > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com> wrote:
> > >
> > > > Right
> > > >
> > > > ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > > >
> > > > > Good! Basically your orchestrator just takes some predefined graph
> of
> > > > > distributed services to be invoked, calls them by some kind of RPC
> > and
> > > > > passes the needed parameters between them, right?
> > > > >
> > > > > Sergi
> > > > >
> > > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > > >
> > > > > > orchestrator is a custom thing. He is responsible for managing
> > > business
> > > > > > scenarios flows. Many nodes are involved in scenarios. They
> > exchange
> > > > data
> > > > > > and folow one another. If you acquinted with BPMN framework, so
> > > > > > orchestrator is like bpmn engine.
> > > > > >
> > > > > > вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > > > > >
> > > > > > > What is Orchestrator for you? Is it a thing from Microsoft or
> > your
> > > > > custom
> > > > > > > in-house software?
> > > > > > >
> > > > > > > Sergi
> > > > > > >
> > > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > > Fine. Let's say we've got multiple servers which fulfills
> > custom
> > > > > logic.
> > > > > > > > This servers compound oriented graph (BPMN process) which
> > > > controlled
> > > > > by
> > > > > > > > Orchestrator.
> > > > > > > > For instance, *server1  *creates *variable A *with value 1,
> > > > persists
> > > > > it
> > > > > > > to
> > > > > > > > IGNITE cache and creates *variable B *and sends it to*
> server2.
> > > > *The
> > > > > > > > latests receives *variable B*, do some logic with it and
> stores
> > > to
> > > > > > > IGNITE.
> > > > > > > > All the work made by both servers must be fulfilled in *one*
> > > > > > transaction.
> > > > > > > > Because we need all information done, or nothing(rollbacked).
> > The
> > > > > > > scenario
> > > > > > > > is managed by orchestrator.
> > > > > > > >
> > > > > > > > вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > > > > sergi.vladykin@gmail.com
> > > > > > >:
> > > > > > > >
> > > > > > > > > Ok, it is not a business case, it is your wrong solution
> for
> > > it.
> > > > > > > > > Lets try again, what is the business case?
> > > > > > > > >
> > > > > > > > > Sergi
> > > > > > > > >
> > > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > The case is the following, One starts transaction in one
> > > node,
> > > > > and
> > > > > > > > commit
> > > > > > > > > > this transaction in another jvm node(or rollback it
> > > remotely).
> > > > > > > > > >
> > > > > > > > > > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <
> > > > > > > sergi.vladykin@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Because even if you make it work for some simplistic
> > > > scenario,
> > > > > > get
> > > > > > > > > ready
> > > > > > > > > > to
> > > > > > > > > > > write many fault tolerance tests and make sure that you
> > TXs
> > > > > work
> > > > > > > > > > gracefully
> > > > > > > > > > > in all modes in case of crashes. Also make sure that we
> > do
> > > > not
> > > > > > have
> > > > > > > > any
> > > > > > > > > > > performance drops after all your changes in existing
> > > > > benchmarks.
> > > > > > > All
> > > > > > > > in
> > > > > > > > > > all
> > > > > > > > > > > I don't believe these conditions will be met and your
> > > > > > contribution
> > > > > > > > will
> > > > > > > > > > be
> > > > > > > > > > > accepted.
> > > > > > > > > > >
> > > > > > > > > > > Better solution to what problem? Sending TX to another
> > > node?
> > > > > The
> > > > > > > > > problem
> > > > > > > > > > > statement itself is already wrong. What business case
> you
> > > are
> > > > > > > trying
> > > > > > > > to
> > > > > > > > > > > solve? I'm sure everything you need can be done in a
> much
> > > > more
> > > > > > > simple
> > > > > > > > > and
> > > > > > > > > > > efficient way at the application level.
> > > > > > > > > > >
> > > > > > > > > > > Sergi
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Why wrong ? You know the better solution?
> > > > > > > > > > > >
> > > > > > > > > > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> > > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Just serializing TX object and deserializing it on
> > > > another
> > > > > > node
> > > > > > > > is
> > > > > > > > > > > > > meaningless, because other nodes participating in
> the
> > > TX
> > > > > have
> > > > > > > to
> > > > > > > > > know
> > > > > > > > > > > > about
> > > > > > > > > > > > > the new coordinator. This will require protocol
> > > changes,
> > > > we
> > > > > > > > > > definitely
> > > > > > > > > > > > will
> > > > > > > > > > > > > have fault tolerance and performance issues. IMO
> the
> > > > whole
> > > > > > idea
> > > > > > > > is
> > > > > > > > > > > wrong
> > > > > > > > > > > > > and it makes no sense to waste time on it.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sergi
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > IgniteTransactionState implememntation contains
> > > > > > > IgniteTxEntry's
> > > > > > > > > > which
> > > > > > > > > > > > is
> > > > > > > > > > > > > > supposed to be transferable
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > > > > > > > > > > dsetrakyan@apache.org
> > > > > > > > > > > > >:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > It sounds a little scary to me that we are
> > passing
> > > > > > > > transaction
> > > > > > > > > > > > objects
> > > > > > > > > > > > > > > around. Such object may contain all sorts of
> > Ignite
> > > > > > > context.
> > > > > > > > If
> > > > > > > > > > > some
> > > > > > > > > > > > > data
> > > > > > > > > > > > > > > needs to be passed across, we should create a
> > > special
> > > > > > > > transfer
> > > > > > > > > > > object
> > > > > > > > > > > > > in
> > > > > > > > > > > > > > > this case.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > D.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY
> > KUZNETSOV
> > > <
> > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > well, there a couple of issues preventing
> > > > transaction
> > > > > > > > > > proceeding.
> > > > > > > > > > > > > > > > At first, After transaction serialization and
> > > > > > > > deserialization
> > > > > > > > > > on
> > > > > > > > > > > > the
> > > > > > > > > > > > > > > remote
> > > > > > > > > > > > > > > > server, there is no txState. So im going to
> put
> > > it
> > > > in
> > > > > > > > > > > > > > > > writeExternal()\readExternal()
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The last one is Deserialized transaction
> lacks
> > of
> > > > > > shared
> > > > > > > > > cache
> > > > > > > > > > > > > context
> > > > > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps, it
> must
> > > be
> > > > > > > injected
> > > > > > > > > by
> > > > > > > > > > > > > > > > GridResourceProcessor ?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY
> KUZNETSOV
> > <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > while starting and continuing transaction
> in
> > > > > > different
> > > > > > > > jvms
> > > > > > > > > > in
> > > > > > > > > > > > run
> > > > > > > > > > > > > > into
> > > > > > > > > > > > > > > > > serialization exception in
> writeExternalMeta
> > :
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > @Override public void
> > > writeExternal(ObjectOutput
> > > > > out)
> > > > > > > > > throws
> > > > > > > > > > > > > > > IOException
> > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > some meta is cannot be serialized.
> > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey
> > Goncharuk <
> > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > I think I am starting to get what you want,
> > > but I
> > > > > > have
> > > > > > > a
> > > > > > > > > few
> > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > > > > >  - What is the API for the proposed change?
> > In
> > > > your
> > > > > > > test,
> > > > > > > > > you
> > > > > > > > > > > > pass
> > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > instance of transaction created on
> ignite(0)
> > to
> > > > the
> > > > > > > > ignite
> > > > > > > > > > > > instance
> > > > > > > > > > > > > > > > > ignite(1). This is obviously not possible
> in
> > a
> > > > > truly
> > > > > > > > > > > distributed
> > > > > > > > > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > > > > > > > > - How will you synchronize cache update
> > actions
> > > > and
> > > > > > > > > > transaction
> > > > > > > > > > > > > > commit?
> > > > > > > > > > > > > > > > > Say, you have one node that decided to
> > commit,
> > > > but
> > > > > > > > another
> > > > > > > > > > node
> > > > > > > > > > > > is
> > > > > > > > > > > > > > > still
> > > > > > > > > > > > > > > > > writing within this transaction. How do you
> > > make
> > > > > sure
> > > > > > > > that
> > > > > > > > > > two
> > > > > > > > > > > > > nodes
> > > > > > > > > > > > > > > will
> > > > > > > > > > > > > > > > > not call commit() and rollback()
> > > simultaneously?
> > > > > > > > > > > > > > > > >  - How do you make sure that either
> commit()
> > or
> > > > > > > > rollback()
> > > > > > > > > is
> > > > > > > > > > > > > called
> > > > > > > > > > > > > > if
> > > > > > > > > > > > > > > > an
> > > > > > > > > > > > > > > > > originator failed?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> > > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial
> > > understanding
> > > > > was
> > > > > > > > that
> > > > > > > > > > > > > > transferring
> > > > > > > > > > > > > > > > of
> > > > > > > > > > > > > > > > > tx
> > > > > > > > > > > > > > > > > > ownership from one node to another will
> be
> > > > > happened
> > > > > > > > > > > > automatically
> > > > > > > > > > > > > > > when
> > > > > > > > > > > > > > > > > > originating node is gone down.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY
> > KUZNETSOV
> > > <
> > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Im aiming to span transaction on
> multiple
> > > > > > threads,
> > > > > > > > > nodes,
> > > > > > > > > > > > > > > jvms(soon).
> > > > > > > > > > > > > > > > > So
> > > > > > > > > > > > > > > > > > > every node is able to rollback, or
> commit
> > > > > common
> > > > > > > > > > > > transaction.It
> > > > > > > > > > > > > > > > turned
> > > > > > > > > > > > > > > > > > up i
> > > > > > > > > > > > > > > > > > > need to transfer tx between nodes in
> > order
> > > to
> > > > > > > commit
> > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > different node(in the same jvm).
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey
> > > > Goncharuk <
> > > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Do you mean that you want a concept
> of
> > > > > > > transferring
> > > > > > > > > of
> > > > > > > > > > tx
> > > > > > > > > > > > > > > ownership
> > > > > > > > > > > > > > > > > > from
> > > > > > > > > > > > > > > > > > > > one node to another? My initial
> > > > understanding
> > > > > > was
> > > > > > > > > that
> > > > > > > > > > > you
> > > > > > > > > > > > > want
> > > > > > > > > > > > > > > to
> > > > > > > > > > > > > > > > be
> > > > > > > > > > > > > > > > > > > able
> > > > > > > > > > > > > > > > > > > > to update keys in a transaction from
> > > > multiple
> > > > > > > > threads
> > > > > > > > > > in
> > > > > > > > > > > > > > > parallel.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY
> > > > KUZNETSOV
> > > > > <
> > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Well. Consider transaction started
> in
> > > one
> > > > > > node,
> > > > > > > > and
> > > > > > > > > > > > > continued
> > > > > > > > > > > > > > > in
> > > > > > > > > > > > > > > > > > > another
> > > > > > > > > > > > > > > > > > > > > one.
> > > > > > > > > > > > > > > > > > > > > The following test describes my
> idea:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > IgniteTransactions transactions =
> > > > > > > > > > > ignite1.transactions();
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache
> =
> > > > > > > > > > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > > > > > > > > > testCache");
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Transaction tx =
> > transactions.txStart(
> > > > > > > > concurrency,
> > > > > > > > > > > > > > isolation);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut =
> > > > > > > > > > > > > GridTestUtils.runAsync(()
> > > > > > > > > > > > > > > ->
> > > > > > > > > > > > > > > > {
> > > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts =
> > > > > > > > > ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(
> > > > > > > > TransactionState.STOPPED,
> > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > > >
> > > > > > >  Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > > >
> > > > >  Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > Assert.assertEquals(
> > > > > > TransactionState.COMMITTED,
> > > > > > > > > > > > > tx.state());
> > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1,
> > > > > > > > > > (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3,
> > > > > > > > > > (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.
> > > > > > containsKey("key2"));
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we just
> > > > rebind
> > > > > > *tx*
> > > > > > > > to
> > > > > > > > > > > > current
> > > > > > > > > > > > > > > > thread:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > public void txStart(Transaction
> tx) {
> > > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl
> > > > transactionProxy =
> > > > > > > > > > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(
> > > > > > transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > > >     transactionProxy.
> > > > bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > In method *reopenTx* we alter
> > > *threadMap*
> > > > > so
> > > > > > > that
> > > > > > > > > it
> > > > > > > > > > > > binds
> > > > > > > > > > > > > > > > > > transaction
> > > > > > > > > > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > How do u think about it ?
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38, Denis
> > > Magda <
> > > > > > > > > > > > dmagda@apache.org
> > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > Please share the rationale behind this and the thoughts and design
> > > > > > > > > > > > > > > > > > > > > > ideas you have in mind.
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com> wrote:
> > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > > > Hi all! I'm designing a distributed transaction which can be started at
> > > > > > > > > > > > > > > > > > > > > > > one node and continued at another one. Does anybody have thoughts on it?
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
Ok, here is what you actually need to implement at the application level.

Let's say we have to call 2 services in the following order:
 - Service A: wants to update keys [k1 => v1, k2 => v2] to [k1 => v1a, k2 => v2a]
 - Service B: wants to update keys [k2 => v2a, k3 => v3] to [k2 => v2ab, k3 => v3b]

The change
    from [ k1 => v1,  k2 => v2,   k3 => v3  ]
    to   [ k1 => v1a, k2 => v2ab, k3 => v3b ]
must happen in a single transaction.


Optimistic protocol to solve this:

Each cache key must have a field `otx`, which is a unique orchestrator TX
identifier passed as a parameter to all the services. If `otx` is set to
some value, the key is an intermediate one and is visible only inside that
transaction; for a finalized key `otx` must be null, meaning the key is
committed and visible to everyone.

Each cache value must have a field `ver`, which holds the version of that
value.

For both fields (`otx` and `ver`) the safest choice is a UUID.
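
To make the two fields concrete, here is a minimal sketch of the data model; the class names are illustrative, not part of any Ignite API:

import java.io.Serializable;
import java.util.Objects;
import java.util.UUID;

// Illustrative cache key: otx == null means a finalized, globally visible key.
final class OtxKey implements Serializable {
    final String key;
    final UUID otx;    // orchestrator TX id; null once committed

    OtxKey(String key, UUID otx) {
        this.key = key;
        this.otx = otx;
    }

    @Override public boolean equals(Object o) {
        return o instanceof OtxKey
            && Objects.equals(key, ((OtxKey)o).key)
            && Objects.equals(otx, ((OtxKey)o).otx);
    }

    @Override public int hashCode() {
        return Objects.hash(key, otx);
    }
}

// Illustrative cache value carrying the version compared at commit time.
final class VersionedValue implements Serializable {
    final Object payload;
    final UUID ver;

    VersionedValue(Object payload, UUID ver) {
        this.payload = payload;
        this.ver = ver;
    }
}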

Workflow is the following:

Orchestrator starts the distributed transaction with `otx` = x and passes
this parameter to all the services.

Service A:
 - does some computations
 - stores [k1x => v1a, k2x => v2a] with TTL = Za
      where
          Za is the time remaining from the max Orchestrator TX duration after Service A ends
          k1x, k2x are new temporary keys with field `otx` = x
          v2a has an updated version `ver`
 - returns the set of updated keys and all the old versions to the orchestrator,
       or just stores it in some special atomic cache like
       [x => (k1 -> ver1, k2 -> ver2)] with TTL = Za

Service B:
 - retrieves the updated value k2x => v2a because it knows `otx` = x
 - does computations
 - stores [k2x => v2ab, k3x => v3b] with TTL = Zb
 - updates the set of updated keys like [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)] with TTL = Zb

Service Committer (may be embedded into the Orchestrator):
 - takes all the updated keys and versions for `otx` = x:
       [x => (k1 -> ver1, k2 -> ver2, k3 -> ver3)]
 - in a single transaction checks the current versions of all the old values
       and replaces them with the calculated new ones
 - does cleanup of the temporary keys and values
 - in case of a version mismatch or TX timeout simply rolls back and signals
       the Orchestrator to restart the job with a new `otx`

PROFIT!!

This approach even allows you to run independent parts of the graph in
parallel (with TX transfer you will always run only one part at a time). It
also does not require inventing any special fault-tolerance techniques:
Ignite caches are already fault tolerant, and all the intermediate results
are effectively invisible and stored with a TTL, so in case of any crash you
will not be left with inconsistent state or garbage.
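
For illustration only, here is a sketch of the committer step against the plain Ignite cache API, reusing the OtxKey/VersionedValue shapes sketched earlier. The cache name, the recorded-versions map, and the retry signaling are assumptions, not a definitive implementation:

import java.util.Map;
import java.util.Objects;
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

final class CommitterSketch {
    /**
     * Finalizes orchestrator TX 'otx'. 'oldVers' is the recorded map of
     * key -> expected old version (the [x => (k1 -> ver1, ...)] set above;
     * a null version means the key did not exist before). Returns false on
     * a version mismatch, telling the orchestrator to retry with a new otx.
     */
    static boolean commit(Ignite ignite, UUID otx, Map<String, UUID> oldVers) {
        IgniteCache<OtxKey, VersionedValue> cache = ignite.cache("data");

        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
            for (Map.Entry<String, UUID> e : oldVers.entrySet()) {
                OtxKey finalKey = new OtxKey(e.getKey(), null);
                OtxKey tempKey = new OtxKey(e.getKey(), otx);

                VersionedValue cur = cache.get(finalKey);
                UUID curVer = cur == null ? null : cur.ver;

                // Version check: someone else committed in between -> retry.
                if (!Objects.equals(curVer, e.getValue())) {
                    tx.rollback();
                    return false;
                }

                VersionedValue updated = cache.get(tempKey);

                if (updated != null)
                    cache.put(finalKey, updated); // publish the new version

                cache.remove(tempKey);            // cleanup of the temporary key
            }

            tx.commit();
        }

        return true;
    }
}

Because the whole check-and-swap runs in one Ignite transaction, a concurrent committer for another `otx` either sees the old versions or fails the check and retries.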

Sergi


2017-03-15 11:42 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Okay, we are open to proposals on the business task. I mean, we can make use
> of some other thing, not a distributed transaction; perhaps not a transaction
> at all.
>
> ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <vo...@gridgain.com>:
>
> > IMO the use case makes sense. However, as Sergi already mentioned, the
> > problem is far more complex than simply passing TX state over a wire.
> > Most probably a kind of coordinator will still be required to manage all
> > kinds of failures. This task should be started with a clean design
> > proposal explaining how we handle all these concurrent events. And only
> > then, when we understand all the implications, should we move to the
> > development stage.
> >
> > On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com> wrote:
> >
> > > Right
> > >
> > > ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <sergi.vladykin@gmail.com
> >:
> > >
> > > > Good! Basically your orchestrator just takes some predefined graph of
> > > > distributed services to be invoked, calls them by some kind of RPC and
> > > > passes the needed parameters between them, right?
> > > >
> > > > Sergi
> > > >
> > > > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > > >
> > > > > The orchestrator is a custom thing. It is responsible for managing
> > > > > business scenario flows. Many nodes are involved in the scenarios;
> > > > > they exchange data and follow one another. If you are acquainted
> > > > > with the BPMN framework, the orchestrator is like a BPMN engine.
> > > > >
> > > > > вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <sergi.vladykin@gmail.com
> >:
> > > > >
> > > > > > What is Orchestrator for you? Is it a thing from Microsoft or
> your
> > > > custom
> > > > > > in-house software?
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > > >:
> > > > > >
> > > > > > > Fine. Let's say we've got multiple servers which fulfills
> custom
> > > > logic.
> > > > > > > This servers compound oriented graph (BPMN process) which
> > > controlled
> > > > by
> > > > > > > Orchestrator.
> > > > > > > For instance, *server1  *creates *variable A *with value 1,
> > > persists
> > > > it
> > > > > > to
> > > > > > > IGNITE cache and creates *variable B *and sends it to* server2.
> > > *The
> > > > > > > latests receives *variable B*, do some logic with it and stores
> > to
> > > > > > IGNITE.
> > > > > > > All the work made by both servers must be fulfilled in *one*
> > > > > transaction.
> > > > > > > Because we need all information done, or nothing(rollbacked).
> The
> > > > > > scenario
> > > > > > > is managed by orchestrator.
> > > > > > >
> > > > > > > вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > > > sergi.vladykin@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > > Ok, it is not a business case, it is your wrong solution for
> > it.
> > > > > > > > Lets try again, what is the business case?
> > > > > > > >
> > > > > > > > Sergi
> > > > > > > >
> > > > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > > >:
> > > > > > > >
> > > > > > > > > The case is the following: one starts a transaction in one
> > > > > > > > > node, and commits this transaction in another JVM node (or
> > > > > > > > > rolls it back remotely).
> > > > > > > > >
> > > > > > > > > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <
> > > > > > sergi.vladykin@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Because even if you make it work for some simplistic
> > > > > > > > > > scenario, get ready to write many fault tolerance tests and
> > > > > > > > > > make sure that your TXs work gracefully in all modes in case
> > > > > > > > > > of crashes. Also make sure that we do not have any performance
> > > > > > > > > > drops after all your changes in existing benchmarks. All in
> > > > > > > > > > all, I don't believe these conditions will be met and your
> > > > > > > > > > contribution will be accepted.
> > > > > > > > > >
> > > > > > > > > > Better solution to what problem? Sending a TX to another node?
> > > > > > > > > > The problem statement itself is already wrong. What business
> > > > > > > > > > case are you trying to solve? I'm sure everything you need can
> > > > > > > > > > be done in a much simpler and more efficient way at the
> > > > > > > > > > application level.
> > > > > > > > > >
> > > > > > > > > > Sergi
> > > > > > > > > >
> > > > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Why wrong? Do you know a better solution?
> > > > > > > > > > >
> > > > > > > > > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> > > > > > > > sergi.vladykin@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Just serializing the TX object and deserializing it on
> > > > > > > > > > > > another node is meaningless, because the other nodes
> > > > > > > > > > > > participating in the TX have to know about the new
> > > > > > > > > > > > coordinator. This will require protocol changes; we will
> > > > > > > > > > > > definitely have fault tolerance and performance issues. IMO
> > > > > > > > > > > > the whole idea is wrong and it makes no sense to waste time
> > > > > > > > > > > > on it.
> > > > > > > > > > > >
> > > > > > > > > > > > Sergi
> > > > > > > > > > > >
> > > > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > The IgniteTransactionState implementation contains
> > > > > > > > > > > > > IgniteTxEntry's, which are supposed to be transferable.
> > > > > > > > > > > > >
> > > > > > > > > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > > > > > > > > > dsetrakyan@apache.org
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > It sounds a little scary to me that we are passing
> > > > > > > > > > > > > > transaction objects around. Such an object may contain
> > > > > > > > > > > > > > all sorts of Ignite context. If some data needs to be
> > > > > > > > > > > > > > passed across, we should create a special transfer
> > > > > > > > > > > > > > object in this case.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > D.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY
> KUZNETSOV
> > <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Well, there are a couple of issues preventing the
> > > > > > > > > > > > > > > transaction from proceeding.
> > > > > > > > > > > > > > > First, after transaction serialization and
> > > > > > > > > > > > > > > deserialization on the remote server, there is no
> > > > > > > > > > > > > > > txState, so I'm going to put it in
> > > > > > > > > > > > > > > writeExternal()/readExternal().
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The second is that the deserialized transaction lacks
> > > > > > > > > > > > > > > the shared cache context field in TransactionProxyImpl.
> > > > > > > > > > > > > > > Perhaps it must be injected by GridResourceProcessor?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV
> <
> > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > While starting and continuing a transaction in
> > > > > > > > > > > > > > > > different JVMs I run into a serialization exception
> > > > > > > > > > > > > > > > in writeExternalMeta:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Some meta cannot be serialized.
> > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey
> Goncharuk <
> > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > I think I am starting to get what you want, but I
> > > > > > > > > > > > > > > > have a few concerns:
> > > > > > > > > > > > > > > >  - What is the API for the proposed change? In your
> > > > > > > > > > > > > > > > test, you pass an instance of a transaction created
> > > > > > > > > > > > > > > > on ignite(0) to the ignite instance ignite(1). This
> > > > > > > > > > > > > > > > is obviously not possible in a truly distributed
> > > > > > > > > > > > > > > > (multi-JVM) environment.
> > > > > > > > > > > > > > > >  - How will you synchronize cache update actions and
> > > > > > > > > > > > > > > > transaction commit? Say, you have one node that
> > > > > > > > > > > > > > > > decided to commit, but another node is still writing
> > > > > > > > > > > > > > > > within this transaction. How do you make sure that
> > > > > > > > > > > > > > > > two nodes will not call commit() and rollback()
> > > > > > > > > > > > > > > > simultaneously?
> > > > > > > > > > > > > > > >  - How do you make sure that either commit() or
> > > > > > > > > > > > > > > > rollback() is called if the originator failed?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> > > > > > > > > > somefireone@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial understanding
> > > > > > > > > > > > > > > > > was that the transfer of tx ownership from one node
> > > > > > > > > > > > > > > > > to another would happen automatically when the
> > > > > > > > > > > > > > > > > originating node goes down.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY
> KUZNETSOV
> > <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > I'm aiming to span a transaction over multiple
> > > > > > > > > > > > > > > > > > threads, nodes, and JVMs (soon), so every node is
> > > > > > > > > > > > > > > > > > able to roll back or commit the common
> > > > > > > > > > > > > > > > > > transaction. It turned out I need to transfer the
> > > > > > > > > > > > > > > > > > tx between nodes in order to commit the
> > > > > > > > > > > > > > > > > > transaction on a different node (in the same JVM).
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey
> > > Goncharuk <
> > > > > > > > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Do you mean that you want a concept of
> > > > > > > > > > > > > > > > > > > transferring tx ownership from one node to
> > > > > > > > > > > > > > > > > > > another? My initial understanding was that you
> > > > > > > > > > > > > > > > > > > wanted to be able to update keys in a
> > > > > > > > > > > > > > > > > > > transaction from multiple threads in parallel.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY
> > > KUZNETSOV
> > > > <
> > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > > > > > >:
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Well. Consider a transaction started in one node and continued in another one.
> > > > > > > > > > > > > > > > > > > > The following test describes my idea:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to the current thread:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it binds the transaction to the current thread.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > What do you think about it?

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Okay, we are open to proposals on the business task. I mean, we can make use
of some other thing, not a distributed transaction; perhaps not a transaction
at all.

ср, 15 мар. 2017 г. в 11:24, Vladimir Ozerov <vo...@gridgain.com>:

> IMO the use case makes sense. However, as Sergi already mentioned, the
> problem is far more complex than simply passing TX state over a wire. Most
> probably a kind of coordinator will still be required to manage all kinds
> of failures. This task should be started with a clean design proposal
> explaining how we handle all these concurrent events. And only then, when
> we understand all the implications, should we move to the development stage.
>
> On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com> wrote:
>
> > Right
> >
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Vladimir Ozerov <vo...@gridgain.com>.
IMO the use case makes sense. However, as Sergi already mentioned, the
problem is far more complex than simply passing TX state over a wire. Most
probably a kind of coordinator will still be required to manage all kinds
of failures. This task should be started with a clean design proposal
explaining how we handle all these concurrent events. And only then, when
we understand all the implications, should we move to the development stage.
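
As a side note on the coordinator idea: one concern raised earlier in the thread (two participants calling commit() and rollback() simultaneously) can be ruled out by recording a single decision per TX atomically. A sketch, assuming a dedicated decision cache; the cache name and the enum are illustrative:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

final class DecisionSketch {
    enum Decision { COMMIT, ROLLBACK }

    /**
     * Records the outcome for TX 'otx' exactly once. putIfAbsent is atomic,
     * so two participants can never record conflicting decisions; the loser
     * must obey the decision already recorded.
     */
    static Decision decide(Ignite ignite, String otx, Decision wanted) {
        IgniteCache<String, Decision> decisions = ignite.cache("txDecision");

        return decisions.putIfAbsent(otx, wanted)
            ? wanted                 // we won the race; our decision stands
            : decisions.get(otx);    // somebody decided first; follow them
    }
}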

On Wed, Mar 15, 2017 at 10:38 AM, ALEKSEY KUZNETSOV <
alkuznetsov.sb@gmail.com> wrote:

> Right
>
> ср, 15 мар. 2017 г. в 10:35, Sergi Vladykin <se...@gmail.com>:
>
> > Good! Basically your orchestrator just takes some predefined graph of
> > distributed services to be invoked, calls them by some kind of RPC and
> > passes the needed parameters between them, right?
> >
> > Sergi
> >
> > 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > orchestrator is a custom thing. He is responsible for managing business
> > > scenarios flows. Many nodes are involved in scenarios. They exchange
> data
> > > and folow one another. If you acquinted with BPMN framework, so
> > > orchestrator is like bpmn engine.
> > >
> > > вт, 14 Мар 2017 г., 18:56 Sergi Vladykin <se...@gmail.com>:
> > >
> > > > What is Orchestrator for you? Is it a thing from Microsoft or your
> > custom
> > > > in-house software?
> > > >
> > > > Sergi
> > > >
> > > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > > >
> > > > > Fine. Let's say we've got multiple servers which fulfills custom
> > logic.
> > > > > This servers compound oriented graph (BPMN process) which
> controlled
> > by
> > > > > Orchestrator.
> > > > > For instance, *server1  *creates *variable A *with value 1,
> persists
> > it
> > > > to
> > > > > IGNITE cache and creates *variable B *and sends it to* server2.
> *The
> > > > > latests receives *variable B*, do some logic with it and stores to
> > > > IGNITE.
> > > > > All the work made by both servers must be fulfilled in *one*
> > > transaction.
> > > > > Because we need all information done, or nothing(rollbacked). The
> > > > scenario
> > > > > is managed by orchestrator.
> > > > >
> > > > > вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <
> > sergi.vladykin@gmail.com
> > > >:
> > > > >
> > > > > > Ok, it is not a business case, it is your wrong solution for it.
> > > > > > Lets try again, what is the business case?
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > > >:
> > > > > >
> > > > > > > The case is the following, One starts transaction in one node,
> > and
> > > > > commit
> > > > > > > this transaction in another jvm node(or rollback it remotely).
> > > > > > >
> > > > > > > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <
> > > > sergi.vladykin@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > > Because even if you make it work for some simplistic
> scenario,
> > > get
> > > > > > ready
> > > > > > > to
> > > > > > > > write many fault tolerance tests and make sure that you TXs
> > work
> > > > > > > gracefully
> > > > > > > > in all modes in case of crashes. Also make sure that we do
> not
> > > have
> > > > > any
> > > > > > > > performance drops after all your changes in existing
> > benchmarks.
> > > > All
> > > > > in
> > > > > > > all
> > > > > > > > I don't believe these conditions will be met and your
> > > contribution
> > > > > will
> > > > > > > be
> > > > > > > > accepted.
> > > > > > > >
> > > > > > > > Better solution to what problem? Sending TX to another node?
> > The
> > > > > > problem
> > > > > > > > statement itself is already wrong. What business case you are
> > > > trying
> > > > > to
> > > > > > > > solve? I'm sure everything you need can be done in a much
> more
> > > > simple
> > > > > > and
> > > > > > > > efficient way at the application level.
> > > > > > > >
> > > > > > > > Sergi
> > > > > > > >
> > > > > > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > > >:
> > > > > > > >
> > > > > > > > > Why wrong ? You know the better solution?
> > > > > > > > >
> > > > > > > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> > > > > > sergi.vladykin@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Just serializing TX object and deserializing it on
> another
> > > node
> > > > > is
> > > > > > > > > > meaningless, because other nodes participating in the TX
> > have
> > > > to
> > > > > > know
> > > > > > > > > about
> > > > > > > > > > the new coordinator. This will require protocol changes,
> we
> > > > > > > definitely
> > > > > > > > > will
> > > > > > > > > > have fault tolerance and performance issues. IMO the
> whole
> > > idea
> > > > > is
> > > > > > > > wrong
> > > > > > > > > > and it makes no sense to waste time on it.
> > > > > > > > > >
> > > > > > > > > > Sergi
> > > > > > > > > >
> > > > > > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > IgniteTransactionState implememntation contains
> > > > IgniteTxEntry's
> > > > > > > which
> > > > > > > > > is
> > > > > > > > > > > supposed to be transferable
> > > > > > > > > > >
> > > > > > > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > > > > > > > dsetrakyan@apache.org
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > It sounds a little scary to me that we are passing
> > > > > transaction
> > > > > > > > > objects
> > > > > > > > > > > > around. Such object may contain all sorts of Ignite
> > > > context.
> > > > > If
> > > > > > > > some
> > > > > > > > > > data
> > > > > > > > > > > > needs to be passed across, we should create a special
> > > > > transfer
> > > > > > > > object
> > > > > > > > > > in
> > > > > > > > > > > > this case.
> > > > > > > > > > > >
> > > > > > > > > > > > D.
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > well, there a couple of issues preventing
> transaction
> > > > > > > proceeding.
> > > > > > > > > > > > > At first, After transaction serialization and
> > > > > deserialization
> > > > > > > on
> > > > > > > > > the
> > > > > > > > > > > > remote
> > > > > > > > > > > > > server, there is no txState. So im going to put it
> in
> > > > > > > > > > > > > writeExternal()\readExternal()
> > > > > > > > > > > > >
> > > > > > > > > > > > > The last one is Deserialized transaction lacks of
> > > shared
> > > > > > cache
> > > > > > > > > > context
> > > > > > > > > > > > > field at TransactionProxyImpl. Perhaps, it must be
> > > > injected
> > > > > > by
> > > > > > > > > > > > > GridResourceProcessor ?
> > > > > > > > > > > > >
> > > > > > > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <
> > > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > While starting a transaction in one JVM and continuing it in
> > > > > > > > > > > > > > another, I run into a serialization exception in writeExternalMeta:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > @Override public void writeExternal(ObjectOutput out) throws IOException {
> > > > > > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > > > > > >     // ...
> > > > > > > > > > > > > > }
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Some of the meta cannot be serialized.
> > > > > > > > > > > > > > Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I think I am starting to get what you want, but I have a few
> > > > > > > > > > > > > > concerns:
> > > > > > > > > > > > > >  - What is the API for the proposed change? In your test, you
> > > > > > > > > > > > > > pass an instance of a transaction created on ignite(0) to the
> > > > > > > > > > > > > > ignite instance ignite(1). This is obviously not possible in a
> > > > > > > > > > > > > > truly distributed (multi-JVM) environment.
> > > > > > > > > > > > > >  - How will you synchronize cache update actions and transaction
> > > > > > > > > > > > > > commit? Say, you have one node that decided to commit, but
> > > > > > > > > > > > > > another node is still writing within this transaction. How do
> > > > > > > > > > > > > > you make sure that two nodes will not call commit() and
> > > > > > > > > > > > > > rollback() simultaneously?
> > > > > > > > > > > > > >  - How do you make sure that either commit() or rollback() is
> > > > > > > > > > > > > > called if the originator failed?
> > > > > > > > > > > > > >
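
To make the commit/rollback race concrete: within a single JVM it could be fenced
with one atomic "finished" flag, along the lines of the sketch below
(application-level code, not part of Ignite). Across nodes the flag itself would
have to be distributed, which is exactly the hard part of the question:

import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.ignite.transactions.Transaction;

class TxFinisher {
    private final AtomicBoolean finished = new AtomicBoolean();

    // Only the first caller wins; the losing commit/rollback becomes a no-op.
    boolean tryCommit(Transaction tx) {
        if (!finished.compareAndSet(false, true))
            return false;

        tx.commit();

        return true;
    }

    boolean tryRollback(Transaction tx) {
        if (!finished.compareAndSet(false, true))
            return false;

        tx.rollback();

        return true;
    }
}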
> > > > > > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <somefireone@gmail.com>:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Alexey Goncharuk, heh, my initial understanding was that the
> > > > > > > > > > > > > > > transfer of tx ownership from one node to another would happen
> > > > > > > > > > > > > > > automatically when the originating node goes down.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > I'm aiming to span a transaction across multiple threads,
> > > > > > > > > > > > > > > > nodes and (soon) JVMs, so that every node is able to roll
> > > > > > > > > > > > > > > > back or commit the common transaction. It turned out I need
> > > > > > > > > > > > > > > > to transfer the tx between nodes in order to commit the
> > > > > > > > > > > > > > > > transaction on a different node (in the same JVM).
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Do you mean that you want a concept of transferring tx
> > > > > > > > > > > > > > > > > ownership from one node to another? My initial understanding
> > > > > > > > > > > > > > > > > was that you want to be able to update keys in a transaction
> > > > > > > > > > > > > > > > > from multiple threads in parallel.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > --AG
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Well. Consider a transaction started on one node and
> > > > > > > > > > > > > > > > > > continued on another one. The following test describes my idea:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > > > > > });
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to the
> > > > > > > > > > > > > > > > > > current thread:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > > > > > }
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it binds
> > > > > > > > > > > > > > > > > > the transaction to the current thread.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > What do you think about it?
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
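
The reopenTx method referenced above is not an existing IgniteTxManager API; a
minimal sketch of what it might do, assuming the manager keeps a map from thread
id to transaction similar to Ignite's internal threadMap, is:

// Sketch: rebind an existing tx to the calling thread.
public void reopenTx(IgniteInternalTx tx) {
    long oldThreadId = tx.threadId();  // thread the tx was started on

    threadMap.remove(oldThreadId, tx); // unbind it from the old thread

    threadMap.put(Thread.currentThread().getId(), tx); // bind to this thread
}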
> > > > > > > > > > > > > > > > > > Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org>:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Please share the rationale behind this and the thoughts
> > > > > > > > > > > > > > > > > > > and design ideas you have in mind.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY
> > > > > KUZNETSOV <
> > > > > > > > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Hi all! Im designing distributed
> > > > transaction
> > > > > > > which
> > > > > > > > > can
> > > > > > > > > > be
> > > > > > > > > > > > > > started
> > > > > > > > > > > > > > > > at
> > > > > > > > > > > > > > > > > > one
> > > > > > > > > > > > > > > > > > > > node, and continued at other one. Has
> > > > anybody
> > > > > > > > > thoughts
> > > > > > > > > > on
> > > > > > > > > > > > it
> > > > > > > > > > > > > ?
> > > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Right

Wed, Mar 15, 2017 at 10:35, Sergi Vladykin <se...@gmail.com>:

> Good! Basically your orchestrator just takes some predefined graph of
> distributed services to be invoked, calls them by some kind of RPC and
> passes the needed parameters between them, right?
>
> Sergi
>
> 2017-03-14 22:46 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > The orchestrator is a custom thing. It is responsible for managing business
> > scenario flows. Many nodes are involved in the scenarios; they exchange data
> > and follow one another. If you are acquainted with the BPMN framework, the
> > orchestrator is like a BPMN engine.
> >
> > Tue, Mar 14, 2017 at 18:56, Sergi Vladykin <se...@gmail.com>:
> >
> > > What is Orchestrator for you? Is it a thing from Microsoft or your
> > > custom in-house software?
> > >
> > > Sergi
> > >
> > > 2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > >
> > > > Fine. Let's say we've got multiple servers, each fulfilling custom logic.
> > > > These servers compose an oriented graph (a BPMN process) which is
> > > > controlled by the Orchestrator.
> > > > For instance, *server1* creates *variable A* with value 1, persists it to
> > > > the IGNITE cache, then creates *variable B* and sends it to *server2*. The
> > > > latter receives *variable B*, does some logic with it and stores the
> > > > result to IGNITE. All the work made by both servers must be fulfilled in
> > > > *one* transaction, because we need all the information committed, or
> > > > nothing (rolled back). The scenario is managed by the orchestrator.
> > > >
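
Reduced to code, the flow described above wants to express something like the
sketch below (names are illustrative; with the current API the transaction could
not actually span server1 and server2, which is the point of this thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;

public class OrchestratedFlow {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<String, Integer> cache = ignite.getOrCreateCache("processCache");

        try (Transaction tx = ignite.transactions().txStart()) {
            int a = 1;
            cache.put("variableA", a); // server1's work

            int b = a + 1;             // "variable B", handed over to server2
            cache.put("variableB", b); // server2's work, ideally the same tx

            tx.commit();               // all or nothing
        }
    }
}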
> > > > Tue, Mar 14, 2017 at 17:31, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > >
> > > > > Ok, it is not a business case, it is your wrong solution for it.
> > > > > Let's try again: what is the business case?
> > > > >
> > > > > Sergi
> > > > >
> > > > > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > >
> > > > > > The case is the following: one starts a transaction on one node, and
> > > > > > commits it on another JVM node (or rolls it back remotely).
> > > > > >
> > > > > > Tue, Mar 14, 2017 at 16:30, Sergi Vladykin <sergi.vladykin@gmail.com>:
> > > > > >
> > > > > > > Because even if you make it work for some simplistic scenario, get
> > > > > > > ready to write many fault-tolerance tests and make sure that your TXs
> > > > > > > work gracefully in all modes in case of crashes. Also make sure that
> > > > > > > we do not have any performance drops in the existing benchmarks after
> > > > > > > all your changes. All in all, I don't believe these conditions will
> > > > > > > be met and your contribution will be accepted.
> > > > > > >
> > > > > > > Better solution to what problem? Sending a TX to another node? The
> > > > > > > problem statement itself is already wrong. What business case are you
> > > > > > > trying to solve? I'm sure everything you need can be done in a much
> > > > > > > simpler and more efficient way at the application level.
> > > > > > >
> > > > > > > Sergi
> > > > > > >
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
Good! Basically your orchestrator just takes some predefined graph of
distributed services to be invoked, calls them by some kind of RPC and
passes the needed parameters between them, right?

Sergi
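
In other words, the orchestrator amounts to something like this hypothetical
sketch (nothing Ignite-specific; step functions stand in for the RPC calls):

import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Each step is one distributed service: it takes the accumulated variables
// and returns the updated ones, which are passed on to the next service.
class Orchestrator {
    private final List<UnaryOperator<Map<String, Object>>> steps;

    Orchestrator(List<UnaryOperator<Map<String, Object>>> steps) {
        this.steps = steps; // the predefined graph, flattened to execution order
    }

    Map<String, Object> run(Map<String, Object> vars) {
        for (UnaryOperator<Map<String, Object>> step : steps)
            vars = step.apply(vars); // "RPC" to the service, passing parameters

        return vars;
    }
}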


Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
The orchestrator is a custom thing. It is responsible for managing business
scenario flows. Many nodes are involved in the scenarios; they exchange data
and follow one another. If you are acquainted with the BPMN framework, the
orchestrator is like a BPMN engine.

Tue, Mar 14, 2017 at 18:56, Sergi Vladykin <se...@gmail.com>:

> What is Orchestrator for you? Is it a thing from Microsoft or your custom
> in-house software?
>
> Sergi
> > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
What is the Orchestrator for you? Is it a product from Microsoft or your own
custom in-house software?

Sergi

2017-03-14 18:00 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Fine. Let's say we've got multiple servers which fulfills custom logic.
> This servers compound oriented graph (BPMN process) which controlled by
> Orchestrator.
> For instance, *server1  *creates *variable A *with value 1, persists it to
> IGNITE cache and creates *variable B *and sends it to* server2. *The
> latests receives *variable B*, do some logic with it and stores to IGNITE.
> All the work made by both servers must be fulfilled in *one* transaction.
> Because we need all information done, or nothing(rollbacked). The scenario
> is managed by orchestrator.
>
> вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <se...@gmail.com>:
>
> > Ok, it is not a business case, it is your wrong solution for it.
> > Lets try again, what is the business case?
> >
> > Sergi
> >
> > 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > The case is the following, One starts transaction in one node, and
> commit
> > > this transaction in another jvm node(or rollback it remotely).
> > >
> > > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <sergi.vladykin@gmail.com
> >:
> > >
> > > > Because even if you make it work for some simplistic scenario, get
> > ready
> > > to
> > > > write many fault tolerance tests and make sure that you TXs work
> > > gracefully
> > > > in all modes in case of crashes. Also make sure that we do not have
> any
> > > > performance drops after all your changes in existing benchmarks. All
> in
> > > all
> > > > I don't believe these conditions will be met and your contribution
> will
> > > be
> > > > accepted.
> > > >
> > > > Better solution to what problem? Sending TX to another node? The
> > problem
> > > > statement itself is already wrong. What business case you are trying
> to
> > > > solve? I'm sure everything you need can be done in a much more simple
> > and
> > > > efficient way at the application level.
> > > >
> > > > Sergi
> > > >
> > > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > > >
> > > > > Why wrong ? You know the better solution?
> > > > >
> > > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> > sergi.vladykin@gmail.com
> > > >:
> > > > >
> > > > > > Just serializing TX object and deserializing it on another node
> is
> > > > > > meaningless, because other nodes participating in the TX have to
> > know
> > > > > about
> > > > > > the new coordinator. This will require protocol changes, we
> > > definitely
> > > > > will
> > > > > > have fault tolerance and performance issues. IMO the whole idea
> is
> > > > wrong
> > > > > > and it makes no sense to waste time on it.
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com
> > > > >:
> > > > > >
> > > > > > > IgniteTransactionState implememntation contains IgniteTxEntry's
> > > which
> > > > > is
> > > > > > > supposed to be transferable
> > > > > > >
> > > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > > > dsetrakyan@apache.org
> > > > > >:
> > > > > > >
> > > > > > > > It sounds a little scary to me that we are passing
> transaction
> > > > > objects
> > > > > > > > around. Such object may contain all sorts of Ignite context.
> If
> > > > some
> > > > > > data
> > > > > > > > needs to be passed across, we should create a special
> transfer
> > > > object
> > > > > > in
> > > > > > > > this case.
> > > > > > > >
> > > > > > > > D.
> > > > > > > >
> > > > > > > >
> > > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > well, there a couple of issues preventing transaction
> > > proceeding.
> > > > > > > > > At first, After transaction serialization and
> deserialization
> > > on
> > > > > the
> > > > > > > > remote
> > > > > > > > > server, there is no txState. So im going to put it in
> > > > > > > > > writeExternal()\readExternal()
> > > > > > > > >
> > > > > > > > > The last one is Deserialized transaction lacks of shared
> > cache
> > > > > > context
> > > > > > > > > field at TransactionProxyImpl. Perhaps, it must be injected
> > by
> > > > > > > > > GridResourceProcessor ?
> > > > > > > > >
> > > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > >
> > > > > > > > > > while starting and continuing transaction in different
> jvms
> > > in
> > > > > run
> > > > > > > into
> > > > > > > > > > serialization exception in writeExternalMeta :
> > > > > > > > > >
> > > > > > > > > > @Override public void writeExternal(ObjectOutput out)
> > throws
> > > > > > > > IOException
> > > > > > > > > {
> > > > > > > > > >     writeExternalMeta(out);
> > > > > > > > > >
> > > > > > > > > > some meta is cannot be serialized.
> > > > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk <
> > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > Aleksey,
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > I think I am starting to get what you want, but I have a
> > few
> > > > > > > concerns:
> > > > > > > > > >  - What is the API for the proposed change? In your test,
> > you
> > > > > pass
> > > > > > an
> > > > > > > > > > instance of transaction created on ignite(0) to the
> ignite
> > > > > instance
> > > > > > > > > > ignite(1). This is obviously not possible in a truly
> > > > distributed
> > > > > > > > > > (multi-jvm) environment.
> > > > > > > > > > - How will you synchronize cache update actions and
> > > transaction
> > > > > > > commit?
> > > > > > > > > > Say, you have one node that decided to commit, but
> another
> > > node
> > > > > is
> > > > > > > > still
> > > > > > > > > > writing within this transaction. How do you make sure
> that
> > > two
> > > > > > nodes
> > > > > > > > will
> > > > > > > > > > not call commit() and rollback() simultaneously?
> > > > > > > > > >  - How do you make sure that either commit() or
> rollback()
> > is
> > > > > > called
> > > > > > > if
> > > > > > > > > an
> > > > > > > > > > originator failed?
> > > > > > > > > >
> > > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> > > > somefireone@gmail.com
> > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Alexey Goncharuk, heh, my initial understanding was
> that
> > > > > > > transferring
> > > > > > > > > of
> > > > > > > > > > tx
> > > > > > > > > > > ownership from one node to another will be happened
> > > > > automatically
> > > > > > > > when
> > > > > > > > > > > originating node is gone down.
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Im aiming to span transaction on multiple threads,
> > nodes,
> > > > > > > > jvms(soon).
> > > > > > > > > > So
> > > > > > > > > > > > every node is able to rollback, or commit common
> > > > > transaction.It
> > > > > > > > > turned
> > > > > > > > > > > up i
> > > > > > > > > > > > need to transfer tx between nodes in order to commit
> > > > > > transaction
> > > > > > > in
> > > > > > > > > > > > different node(in the same jvm).
> > > > > > > > > > > >
> > > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> > > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Aleksey,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Do you mean that you want a concept of transferring
> > of
> > > tx
> > > > > > > > ownership
> > > > > > > > > > > from
> > > > > > > > > > > > > one node to another? My initial understanding was
> > that
> > > > you
> > > > > > want
> > > > > > > > to
> > > > > > > > > be
> > > > > > > > > > > > able
> > > > > > > > > > > > > to update keys in a transaction from multiple
> threads
> > > in
> > > > > > > > parallel.
> > > > > > > > > > > > >
> > > > > > > > > > > > > --AG
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Well. Consider transaction started in one node,
> and
> > > > > > continued
> > > > > > > > in
> > > > > > > > > > > > another
> > > > > > > > > > > > > > one.
> > > > > > > > > > > > > > The following test describes my idea:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > IgniteTransactions transactions =
> > > > ignite1.transactions();
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > IgniteCache<String, Integer> cache =
> > > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > > testCache");
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Transaction tx = transactions.txStart(
> concurrency,
> > > > > > > isolation);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut =
> > > > > > GridTestUtils.runAsync(()
> > > > > > > > ->
> > > > > > > > > {
> > > > > > > > > > > > > >     IgniteTransactions ts =
> > ignite(1).transactions();
> > > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > > >     Assert.assertEquals(
> TransactionState.STOPPED,
> > > > > > > tx.state());
> > > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > > tx.state());
> > > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > > });
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED,
> > > > > > tx.state());
> > > > > > > > > > > > > > Assert.assertEquals((long)1,
> > > (long)cache.get("key1"));
> > > > > > > > > > > > > > Assert.assertEquals((long)3,
> > > (long)cache.get("key3"));
> > > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx*
> to
> > > > > current
> > > > > > > > > thread:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > > >     TransactionProxyImpl transactionProxy =
> > > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > > }
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that
> > it
> > > > > binds
> > > > > > > > > > > transaction
> > > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > How do u think about it ?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <
> > > > > dmagda@apache.org
> > > > > > >:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Please share the rational behind this and the
> > > > thoughts,
> > > > > > > > design
> > > > > > > > > > > ideas
> > > > > > > > > > > > > you
> > > > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > —
> > > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY
> KUZNETSOV <
> > > > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Hi all! Im designing distributed transaction
> > > which
> > > > > can
> > > > > > be
> > > > > > > > > > started
> > > > > > > > > > > > at
> > > > > > > > > > > > > > one
> > > > > > > > > > > > > > > > node, and continued at other one. Has anybody
> > > > > thoughts
> > > > > > on
> > > > > > > > it
> > > > > > > > > ?
> > > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > --
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > >
> > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > >
> > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Fine. Let's say we've got multiple servers, each of which fulfills custom logic.
These servers compose an oriented graph (a BPMN process) controlled by an
Orchestrator.
For instance, *server1* creates *variable A* with value 1, persists it to the
IGNITE cache, then creates *variable B* and sends it to *server2*. The latter
receives *variable B*, applies some logic to it and stores the result to IGNITE.
All the work done by both servers must be fulfilled in *one* transaction,
because we need either all of it persisted, or nothing (rolled back). The
scenario is managed by the orchestrator.
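
To make the intended API concrete, here is a rough sketch of how the scenario
above could look with the tx.stop()/txStart(tx) calls proposed earlier in this
thread. Both calls are purely hypothetical (they do not exist in Ignite today),
and the grid, cache and variable names are made up for illustration (imports
from org.apache.ignite.* and org.apache.ignite.transactions.* are assumed):

// On server1: open the shared transaction and persist variable A.
Ignite ignite1 = Ignition.ignite("server1");
IgniteCache<String, Integer> cache1 = ignite1.cache("testCache");

Transaction tx = ignite1.transactions().txStart(
    TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);

cache1.put("variableA", 1);

tx.stop(); // Hypothetical: suspend the tx so that another node can resume it.

// The orchestrator then ships the suspended tx (or some transfer object
// derived from it) to server2 together with variable B.

// On server2: resume the same transaction, finish the work and commit.
Ignite ignite2 = Ignition.ignite("server2");

ignite2.transactions().txStart(tx); // Hypothetical: rebind tx to this node.

ignite2.cache("testCache").put("variableB", 2);

tx.commit(); // Either both puts become visible, or neither (on rollback).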

вт, 14 мар. 2017 г. в 17:31, Sergi Vladykin <se...@gmail.com>:

> Ok, it is not a business case, it is your wrong solution for it.
> Lets try again, what is the business case?
>
> Sergi
>
> 2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > The case is the following, One starts transaction in one node, and commit
> > this transaction in another jvm node(or rollback it remotely).
> >
> > вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <se...@gmail.com>:
> >
> > > Because even if you make it work for some simplistic scenario, get
> ready
> > to
> > > write many fault tolerance tests and make sure that you TXs work
> > gracefully
> > > in all modes in case of crashes. Also make sure that we do not have any
> > > performance drops after all your changes in existing benchmarks. All in
> > all
> > > I don't believe these conditions will be met and your contribution will
> > be
> > > accepted.
> > >
> > > Better solution to what problem? Sending TX to another node? The
> problem
> > > statement itself is already wrong. What business case you are trying to
> > > solve? I'm sure everything you need can be done in a much more simple
> and
> > > efficient way at the application level.
> > >
> > > Sergi
> > >
> > > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > Why wrong ? You know the better solution?
> > > >
> > > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <
> sergi.vladykin@gmail.com
> > >:
> > > >
> > > > > Just serializing TX object and deserializing it on another node is
> > > > > meaningless, because other nodes participating in the TX have to
> know
> > > > about
> > > > > the new coordinator. This will require protocol changes, we
> > definitely
> > > > will
> > > > > have fault tolerance and performance issues. IMO the whole idea is
> > > wrong
> > > > > and it makes no sense to waste time on it.
> > > > >
> > > > > Sergi
> > > > >
> > > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > > >
> > > > > > IgniteTransactionState implememntation contains IgniteTxEntry's
> > which
> > > > is
> > > > > > supposed to be transferable
> > > > > >
> > > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > > dsetrakyan@apache.org
> > > > >:
> > > > > >
> > > > > > > It sounds a little scary to me that we are passing transaction
> > > > objects
> > > > > > > around. Such object may contain all sorts of Ignite context. If
> > > some
> > > > > data
> > > > > > > needs to be passed across, we should create a special transfer
> > > object
> > > > > in
> > > > > > > this case.
> > > > > > >
> > > > > > > D.
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > wrote:
> > > > > > >
> > > > > > > > well, there a couple of issues preventing transaction
> > proceeding.
> > > > > > > > At first, After transaction serialization and deserialization
> > on
> > > > the
> > > > > > > remote
> > > > > > > > server, there is no txState. So im going to put it in
> > > > > > > > writeExternal()\readExternal()
> > > > > > > >
> > > > > > > > The last one is Deserialized transaction lacks of shared
> cache
> > > > > context
> > > > > > > > field at TransactionProxyImpl. Perhaps, it must be injected
> by
> > > > > > > > GridResourceProcessor ?
> > > > > > > >
> > > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > >
> > > > > > > > > while starting and continuing transaction in different jvms
> > in
> > > > run
> > > > > > into
> > > > > > > > > serialization exception in writeExternalMeta :
> > > > > > > > >
> > > > > > > > > @Override public void writeExternal(ObjectOutput out)
> throws
> > > > > > > IOException
> > > > > > > > {
> > > > > > > > >     writeExternalMeta(out);
> > > > > > > > >
> > > > > > > > > some meta is cannot be serialized.
> > > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk <
> > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > >:
> > > > > > > > >
> > > > > > > > > Aleksey,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think I am starting to get what you want, but I have a
> few
> > > > > > concerns:
> > > > > > > > >  - What is the API for the proposed change? In your test,
> you
> > > > pass
> > > > > an
> > > > > > > > > instance of transaction created on ignite(0) to the ignite
> > > > instance
> > > > > > > > > ignite(1). This is obviously not possible in a truly
> > > distributed
> > > > > > > > > (multi-jvm) environment.
> > > > > > > > > - How will you synchronize cache update actions and
> > transaction
> > > > > > commit?
> > > > > > > > > Say, you have one node that decided to commit, but another
> > node
> > > > is
> > > > > > > still
> > > > > > > > > writing within this transaction. How do you make sure that
> > two
> > > > > nodes
> > > > > > > will
> > > > > > > > > not call commit() and rollback() simultaneously?
> > > > > > > > >  - How do you make sure that either commit() or rollback()
> is
> > > > > called
> > > > > > if
> > > > > > > > an
> > > > > > > > > originator failed?
> > > > > > > > >
> > > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> > > somefireone@gmail.com
> > > > >:
> > > > > > > > >
> > > > > > > > > > Alexey Goncharuk, heh, my initial understanding was that
> > > > > > transferring
> > > > > > > > of
> > > > > > > > > tx
> > > > > > > > > > ownership from one node to another will be happened
> > > > automatically
> > > > > > > when
> > > > > > > > > > originating node is gone down.
> > > > > > > > > >
> > > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Im aiming to span transaction on multiple threads,
> nodes,
> > > > > > > jvms(soon).
> > > > > > > > > So
> > > > > > > > > > > every node is able to rollback, or commit common
> > > > transaction.It
> > > > > > > > turned
> > > > > > > > > > up i
> > > > > > > > > > > need to transfer tx between nodes in order to commit
> > > > > transaction
> > > > > > in
> > > > > > > > > > > different node(in the same jvm).
> > > > > > > > > > >
> > > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> > > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Aleksey,
> > > > > > > > > > > >
> > > > > > > > > > > > Do you mean that you want a concept of transferring
> of
> > tx
> > > > > > > ownership
> > > > > > > > > > from
> > > > > > > > > > > > one node to another? My initial understanding was
> that
> > > you
> > > > > want
> > > > > > > to
> > > > > > > > be
> > > > > > > > > > > able
> > > > > > > > > > > > to update keys in a transaction from multiple threads
> > in
> > > > > > > parallel.
> > > > > > > > > > > >
> > > > > > > > > > > > --AG
> > > > > > > > > > > >
> > > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Well. Consider transaction started in one node, and
> > > > > continued
> > > > > > > in
> > > > > > > > > > > another
> > > > > > > > > > > > > one.
> > > > > > > > > > > > > The following test describes my idea:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > > >
> > > > > > > > > > > > > IgniteTransactions transactions =
> > > ignite1.transactions();
> > > > > > > > > > > > >
> > > > > > > > > > > > > IgniteCache<String, Integer> cache =
> > > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > > testCache");
> > > > > > > > > > > > >
> > > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency,
> > > > > > isolation);
> > > > > > > > > > > > >
> > > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > > >
> > > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > > >
> > > > > > > > > > > > > tx.stop();
> > > > > > > > > > > > >
> > > > > > > > > > > > > IgniteInternalFuture<Boolean> fut =
> > > > > GridTestUtils.runAsync(()
> > > > > > > ->
> > > > > > > > {
> > > > > > > > > > > > >     IgniteTransactions ts =
> ignite(1).transactions();
> > > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED,
> > > > > > tx.state());
> > > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE,
> > > > > > tx.state());
> > > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > > >     return true;
> > > > > > > > > > > > > });
> > > > > > > > > > > > >
> > > > > > > > > > > > > fut.get();
> > > > > > > > > > > > >
> > > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED,
> > > > > tx.state());
> > > > > > > > > > > > > Assert.assertEquals((long)1,
> > (long)cache.get("key1"));
> > > > > > > > > > > > > Assert.assertEquals((long)3,
> > (long)cache.get("key3"));
> > > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > > >
> > > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to
> > > > current
> > > > > > > > thread:
> > > > > > > > > > > > >
> > > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > > >     TransactionProxyImpl transactionProxy =
> > > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > > }
> > > > > > > > > > > > >
> > > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that
> it
> > > > binds
> > > > > > > > > > transaction
> > > > > > > > > > > > > to current thread.
> > > > > > > > > > > > >
> > > > > > > > > > > > > How do u think about it ?
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <
> > > > dmagda@apache.org
> > > > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Please share the rational behind this and the
> > > thoughts,
> > > > > > > design
> > > > > > > > > > ideas
> > > > > > > > > > > > you
> > > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > —
> > > > > > > > > > > > > > Denis
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hi all! Im designing distributed transaction
> > which
> > > > can
> > > > > be
> > > > > > > > > started
> > > > > > > > > > > at
> > > > > > > > > > > > > one
> > > > > > > > > > > > > > > node, and continued at other one. Has anybody
> > > > thoughts
> > > > > on
> > > > > > > it
> > > > > > > > ?
> > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
Ok, that is not a business case, that is your (wrong) solution for it.
Let's try again: what is the business case?

Sergi

2017-03-14 16:42 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> The case is the following, One starts transaction in one node, and commit
> this transaction in another jvm node(or rollback it remotely).
>
> вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <se...@gmail.com>:
>
> > Because even if you make it work for some simplistic scenario, get ready
> to
> > write many fault tolerance tests and make sure that you TXs work
> gracefully
> > in all modes in case of crashes. Also make sure that we do not have any
> > performance drops after all your changes in existing benchmarks. All in
> all
> > I don't believe these conditions will be met and your contribution will
> be
> > accepted.
> >
> > Better solution to what problem? Sending TX to another node? The problem
> > statement itself is already wrong. What business case you are trying to
> > solve? I'm sure everything you need can be done in a much more simple and
> > efficient way at the application level.
> >
> > Sergi
> >
> > 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > Why wrong ? You know the better solution?
> > >
> > > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <sergi.vladykin@gmail.com
> >:
> > >
> > > > Just serializing TX object and deserializing it on another node is
> > > > meaningless, because other nodes participating in the TX have to know
> > > about
> > > > the new coordinator. This will require protocol changes, we
> definitely
> > > will
> > > > have fault tolerance and performance issues. IMO the whole idea is
> > wrong
> > > > and it makes no sense to waste time on it.
> > > >
> > > > Sergi
> > > >
> > > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com
> > >:
> > > >
> > > > > IgniteTransactionState implememntation contains IgniteTxEntry's
> which
> > > is
> > > > > supposed to be transferable
> > > > >
> > > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> > dsetrakyan@apache.org
> > > >:
> > > > >
> > > > > > It sounds a little scary to me that we are passing transaction
> > > objects
> > > > > > around. Such object may contain all sorts of Ignite context. If
> > some
> > > > data
> > > > > > needs to be passed across, we should create a special transfer
> > object
> > > > in
> > > > > > this case.
> > > > > >
> > > > > > D.
> > > > > >
> > > > > >
> > > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > wrote:
> > > > > >
> > > > > > > well, there a couple of issues preventing transaction
> proceeding.
> > > > > > > At first, After transaction serialization and deserialization
> on
> > > the
> > > > > > remote
> > > > > > > server, there is no txState. So im going to put it in
> > > > > > > writeExternal()\readExternal()
> > > > > > >
> > > > > > > The last one is Deserialized transaction lacks of shared cache
> > > > context
> > > > > > > field at TransactionProxyImpl. Perhaps, it must be injected by
> > > > > > > GridResourceProcessor ?
> > > > > > >
> > > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > > >:
> > > > > > >
> > > > > > > > while starting and continuing transaction in different jvms
> in
> > > run
> > > > > into
> > > > > > > > serialization exception in writeExternalMeta :
> > > > > > > >
> > > > > > > > @Override public void writeExternal(ObjectOutput out) throws
> > > > > > IOException
> > > > > > > {
> > > > > > > >     writeExternalMeta(out);
> > > > > > > >
> > > > > > > > some meta is cannot be serialized.
> > > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk <
> > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > >:
> > > > > > > >
> > > > > > > > Aleksey,
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I think I am starting to get what you want, but I have a few
> > > > > concerns:
> > > > > > > >  - What is the API for the proposed change? In your test, you
> > > pass
> > > > an
> > > > > > > > instance of transaction created on ignite(0) to the ignite
> > > instance
> > > > > > > > ignite(1). This is obviously not possible in a truly
> > distributed
> > > > > > > > (multi-jvm) environment.
> > > > > > > > - How will you synchronize cache update actions and
> transaction
> > > > > commit?
> > > > > > > > Say, you have one node that decided to commit, but another
> node
> > > is
> > > > > > still
> > > > > > > > writing within this transaction. How do you make sure that
> two
> > > > nodes
> > > > > > will
> > > > > > > > not call commit() and rollback() simultaneously?
> > > > > > > >  - How do you make sure that either commit() or rollback() is
> > > > called
> > > > > if
> > > > > > > an
> > > > > > > > originator failed?
> > > > > > > >
> > > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> > somefireone@gmail.com
> > > >:
> > > > > > > >
> > > > > > > > > Alexey Goncharuk, heh, my initial understanding was that
> > > > > transferring
> > > > > > > of
> > > > > > > > tx
> > > > > > > > > ownership from one node to another will be happened
> > > automatically
> > > > > > when
> > > > > > > > > originating node is gone down.
> > > > > > > > >
> > > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Im aiming to span transaction on multiple threads, nodes,
> > > > > > jvms(soon).
> > > > > > > > So
> > > > > > > > > > every node is able to rollback, or commit common
> > > transaction.It
> > > > > > > turned
> > > > > > > > > up i
> > > > > > > > > > need to transfer tx between nodes in order to commit
> > > > transaction
> > > > > in
> > > > > > > > > > different node(in the same jvm).
> > > > > > > > > >
> > > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> > > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Aleksey,
> > > > > > > > > > >
> > > > > > > > > > > Do you mean that you want a concept of transferring of
> tx
> > > > > > ownership
> > > > > > > > > from
> > > > > > > > > > > one node to another? My initial understanding was that
> > you
> > > > want
> > > > > > to
> > > > > > > be
> > > > > > > > > > able
> > > > > > > > > > > to update keys in a transaction from multiple threads
> in
> > > > > > parallel.
> > > > > > > > > > >
> > > > > > > > > > > --AG
> > > > > > > > > > >
> > > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Well. Consider transaction started in one node, and
> > > > continued
> > > > > > in
> > > > > > > > > > another
> > > > > > > > > > > > one.
> > > > > > > > > > > > The following test describes my idea:
> > > > > > > > > > > >
> > > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > > >
> > > > > > > > > > > > IgniteTransactions transactions =
> > ignite1.transactions();
> > > > > > > > > > > >
> > > > > > > > > > > > IgniteCache<String, Integer> cache =
> > > > > ignite1.getOrCreateCache("
> > > > > > > > > > > > testCache");
> > > > > > > > > > > >
> > > > > > > > > > > > Transaction tx = transactions.txStart(concurrency,
> > > > > isolation);
> > > > > > > > > > > >
> > > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > > >
> > > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > > >
> > > > > > > > > > > > tx.stop();
> > > > > > > > > > > >
> > > > > > > > > > > > IgniteInternalFuture<Boolean> fut =
> > > > GridTestUtils.runAsync(()
> > > > > > ->
> > > > > > > {
> > > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED,
> > > > > tx.state());
> > > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE,
> > > > > tx.state());
> > > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > > >     tx.commit();
> > > > > > > > > > > >     return true;
> > > > > > > > > > > > });
> > > > > > > > > > > >
> > > > > > > > > > > > fut.get();
> > > > > > > > > > > >
> > > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED,
> > > > tx.state());
> > > > > > > > > > > > Assert.assertEquals((long)1,
> (long)cache.get("key1"));
> > > > > > > > > > > > Assert.assertEquals((long)3,
> (long)cache.get("key3"));
> > > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > > >
> > > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to
> > > current
> > > > > > > thread:
> > > > > > > > > > > >
> > > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > > >     TransactionProxyImpl transactionProxy =
> > > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it
> > > binds
> > > > > > > > > transaction
> > > > > > > > > > > > to current thread.
> > > > > > > > > > > >
> > > > > > > > > > > > How do u think about it ?
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <
> > > dmagda@apache.org
> > > > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Please share the rational behind this and the
> > thoughts,
> > > > > > design
> > > > > > > > > ideas
> > > > > > > > > > > you
> > > > > > > > > > > > > have in mind.
> > > > > > > > > > > > >
> > > > > > > > > > > > > —
> > > > > > > > > > > > > Denis
> > > > > > > > > > > > >
> > > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hi all! Im designing distributed transaction
> which
> > > can
> > > > be
> > > > > > > > started
> > > > > > > > > > at
> > > > > > > > > > > > one
> > > > > > > > > > > > > > node, and continued at other one. Has anybody
> > > thoughts
> > > > on
> > > > > > it
> > > > > > > ?
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > >
> > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > >
> > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
The case is the following: one starts a transaction on one node and commits
this transaction on another JVM node (or rolls it back remotely).
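
To make Dmitriy's "special transfer object" suggestion (quoted below) a bit
more concrete, here is a rough sketch of what such a DTO might carry.
TxTransferToken and its fields are hypothetical; nothing like it exists in
Ignite's public API:

import java.io.Serializable;
import java.util.UUID;
import org.apache.ignite.lang.IgniteUuid;

/**
 * Hypothetical DTO: carries the coordinates of a transaction rather than
 * the transaction object itself with all of its node-local context.
 */
class TxTransferToken implements Serializable {
    /** Transaction id, as returned by Transaction.xid(). */
    final IgniteUuid xid;

    /** Node that currently owns the transaction. */
    final UUID originatingNodeId;

    TxTransferToken(IgniteUuid xid, UUID originatingNodeId) {
        this.xid = xid;
        this.originatingNodeId = originatingNodeId;
    }
}

// The receiving JVM would still have to adopt the tx by id through some new
// internal API, which is exactly the protocol change Sergi warns about.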

вт, 14 мар. 2017 г. в 16:30, Sergi Vladykin <se...@gmail.com>:

> Because even if you make it work for some simplistic scenario, get ready to
> write many fault tolerance tests and make sure that you TXs work gracefully
> in all modes in case of crashes. Also make sure that we do not have any
> performance drops after all your changes in existing benchmarks. All in all
> I don't believe these conditions will be met and your contribution will be
> accepted.
>
> Better solution to what problem? Sending TX to another node? The problem
> statement itself is already wrong. What business case you are trying to
> solve? I'm sure everything you need can be done in a much more simple and
> efficient way at the application level.
>
> Sergi
>
> 2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Why wrong ? You know the better solution?
> >
> > вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <se...@gmail.com>:
> >
> > > Just serializing TX object and deserializing it on another node is
> > > meaningless, because other nodes participating in the TX have to know
> > about
> > > the new coordinator. This will require protocol changes, we definitely
> > will
> > > have fault tolerance and performance issues. IMO the whole idea is
> wrong
> > > and it makes no sense to waste time on it.
> > >
> > > Sergi
> > >
> > > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > IgniteTransactionState implememntation contains IgniteTxEntry's which
> > is
> > > > supposed to be transferable
> > > >
> > > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <
> dsetrakyan@apache.org
> > >:
> > > >
> > > > > It sounds a little scary to me that we are passing transaction
> > objects
> > > > > around. Such object may contain all sorts of Ignite context. If
> some
> > > data
> > > > > needs to be passed across, we should create a special transfer
> object
> > > in
> > > > > this case.
> > > > >
> > > > > D.
> > > > >
> > > > >
> > > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > > wrote:
> > > > >
> > > > > > well, there a couple of issues preventing transaction proceeding.
> > > > > > At first, After transaction serialization and deserialization on
> > the
> > > > > remote
> > > > > > server, there is no txState. So im going to put it in
> > > > > > writeExternal()\readExternal()
> > > > > >
> > > > > > The last one is Deserialized transaction lacks of shared cache
> > > context
> > > > > > field at TransactionProxyImpl. Perhaps, it must be injected by
> > > > > > GridResourceProcessor ?
> > > > > >
> > > > > > пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > >
> > > > > > > while starting and continuing transaction in different jvms in
> > run
> > > > into
> > > > > > > serialization exception in writeExternalMeta :
> > > > > > >
> > > > > > > @Override public void writeExternal(ObjectOutput out) throws
> > > > > IOException
> > > > > > {
> > > > > > >     writeExternalMeta(out);
> > > > > > >
> > > > > > > some meta is cannot be serialized.
> > > > > > > пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk <
> > > > > > alexey.goncharuk@gmail.com
> > > > > > > >:
> > > > > > >
> > > > > > > Aleksey,
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I think I am starting to get what you want, but I have a few
> > > > concerns:
> > > > > > >  - What is the API for the proposed change? In your test, you
> > pass
> > > an
> > > > > > > instance of transaction created on ignite(0) to the ignite
> > instance
> > > > > > > ignite(1). This is obviously not possible in a truly
> distributed
> > > > > > > (multi-jvm) environment.
> > > > > > > - How will you synchronize cache update actions and transaction
> > > > commit?
> > > > > > > Say, you have one node that decided to commit, but another node
> > is
> > > > > still
> > > > > > > writing within this transaction. How do you make sure that two
> > > nodes
> > > > > will
> > > > > > > not call commit() and rollback() simultaneously?
> > > > > > >  - How do you make sure that either commit() or rollback() is
> > > called
> > > > if
> > > > > > an
> > > > > > > originator failed?
> > > > > > >
> > > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <
> somefireone@gmail.com
> > >:
> > > > > > >
> > > > > > > > Alexey Goncharuk, heh, my initial understanding was that
> > > > transferring
> > > > > > of
> > > > > > > tx
> > > > > > > > ownership from one node to another will be happened
> > automatically
> > > > > when
> > > > > > > > originating node is gone down.
> > > > > > > >
> > > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > alkuznetsov.sb@gmail.com
> > > > > > >:
> > > > > > > >
> > > > > > > > > Im aiming to span transaction on multiple threads, nodes,
> > > > > jvms(soon).
> > > > > > > So
> > > > > > > > > every node is able to rollback, or commit common
> > transaction.It
> > > > > > turned
> > > > > > > > up i
> > > > > > > > > need to transfer tx between nodes in order to commit
> > > transaction
> > > > in
> > > > > > > > > different node(in the same jvm).
> > > > > > > > >
> > > > > > > > > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> > > > > > > > alexey.goncharuk@gmail.com
> > > > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Aleksey,
> > > > > > > > > >
> > > > > > > > > > Do you mean that you want a concept of transferring of tx
> > > > > ownership
> > > > > > > > from
> > > > > > > > > > one node to another? My initial understanding was that
> you
> > > want
> > > > > to
> > > > > > be
> > > > > > > > > able
> > > > > > > > > > to update keys in a transaction from multiple threads in
> > > > > parallel.
> > > > > > > > > >
> > > > > > > > > > --AG
> > > > > > > > > >
> > > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov.sb@gmail.com
> > > > > > > > >:
> > > > > > > > > >
> > > > > > > > > > > Well. Consider transaction started in one node, and
> > > continued
> > > > > in
> > > > > > > > > another
> > > > > > > > > > > one.
> > > > > > > > > > > The following test describes my idea:
> > > > > > > > > > >
> > > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > > >
> > > > > > > > > > > IgniteTransactions transactions =
> ignite1.transactions();
> > > > > > > > > > >
> > > > > > > > > > > IgniteCache<String, Integer> cache =
> > > > ignite1.getOrCreateCache("
> > > > > > > > > > > testCache");
> > > > > > > > > > >
> > > > > > > > > > > Transaction tx = transactions.txStart(concurrency,
> > > > isolation);
> > > > > > > > > > >
> > > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > > >
> > > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > > >
> > > > > > > > > > > tx.stop();
> > > > > > > > > > >
> > > > > > > > > > > IgniteInternalFuture<Boolean> fut =
> > > GridTestUtils.runAsync(()
> > > > > ->
> > > > > > {
> > > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED,
> > > > tx.state());
> > > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE,
> > > > tx.state());
> > > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > > >     tx.commit();
> > > > > > > > > > >     return true;
> > > > > > > > > > > });
> > > > > > > > > > >
> > > > > > > > > > > fut.get();
> > > > > > > > > > >
> > > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED,
> > > tx.state());
> > > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > > >
> > > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to
> > current
> > > > > > thread:
> > > > > > > > > > >
> > > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > > >     TransactionProxyImpl transactionProxy =
> > > > > > > (TransactionProxyImpl)tx;
> > > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it
> > binds
> > > > > > > > transaction
> > > > > > > > > > > to current thread.
> > > > > > > > > > >
> > > > > > > > > > > How do u think about it ?
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <
> > dmagda@apache.org
> > > >:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Alexey,
> > > > > > > > > > > >
> > > > > > > > > > > > Please share the rational behind this and the
> thoughts,
> > > > > design
> > > > > > > > ideas
> > > > > > > > > > you
> > > > > > > > > > > > have in mind.
> > > > > > > > > > > >
> > > > > > > > > > > > —
> > > > > > > > > > > > Denis
> > > > > > > > > > > >
> > > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > > wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hi all! Im designing distributed transaction which
> > can
> > > be
> > > > > > > started
> > > > > > > > > at
> > > > > > > > > > > one
> > > > > > > > > > > > > node, and continued at other one. Has anybody
> > thoughts
> > > on
> > > > > it
> > > > > > ?
> > > > > > > > > > > > > --
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > > >
> > > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > >
> > > > > > > > > > > *Best Regards,*
> > > > > > > > > > >
> > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > *Best Regards,*
> > > > > > > > >
> > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > *Best Regards,*
> > > > > > >
> > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
Because even if you make it work for some simplistic scenario, get ready to
write many fault-tolerance tests and to make sure that your TXs work gracefully
in all modes in case of crashes. Also make sure that the existing benchmarks
show no performance drops after all your changes. All in all, I don't believe
these conditions will be met and your contribution will be accepted.

Better solution to what problem? Sending a TX to another node? The problem
statement itself is already wrong. What business case are you trying to
solve? I'm sure everything you need can be done in a much simpler and more
efficient way at the application level.
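
For what it's worth, a minimal sketch of that application-level alternative:
instead of shipping the transaction between JVMs, ship the work to one node
and run it there in a single ordinary transaction. The closure body and the
cache name are made up for illustration:

// Runs entirely on whichever node executes the closure, so the normal
// single-coordinator transaction semantics apply unchanged.
ignite.compute().run(() -> {
    Ignite local = Ignition.localIgnite();
    IgniteCache<String, Integer> cache = local.cache("testCache");

    try (Transaction tx = local.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
        cache.put("variableA", 1); // server1's part of the work
        cache.put("variableB", 2); // server2's part of the work

        tx.commit(); // close() rolls back automatically if commit() isn't reached
    }
});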

Sergi

2017-03-14 16:03 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Why wrong ? You know the better solution?
>
> вт, 14 мар. 2017 г. в 15:46, Sergi Vladykin <se...@gmail.com>:
>
> > Just serializing TX object and deserializing it on another node is
> > meaningless, because other nodes participating in the TX have to know
> about
> > the new coordinator. This will require protocol changes, we definitely
> will
> > have fault tolerance and performance issues. IMO the whole idea is wrong
> > and it makes no sense to waste time on it.
> >
> > Sergi
> >
> > 2017-03-14 10:57 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > IgniteTransactionState implememntation contains IgniteTxEntry's which
> is
> > > supposed to be transferable
> > >
> > > пн, 13 мар. 2017 г. в 19:32, Dmitriy Setrakyan <dsetrakyan@apache.org
> >:
> > >
> > > > It sounds a little scary to me that we are passing transaction
> objects
> > > > around. Such object may contain all sorts of Ignite context. If some
> > data
> > > > needs to be passed across, we should create a special transfer object
> > in
> > > > this case.
> > > >
> > > > D.
> > > >
> > > >
> > > > On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > wrote:
> > > >
> > > > > well, there are a couple of issues preventing the transaction from
> > > > > proceeding. At first: after transaction serialization and
> > > > > deserialization on the remote server, there is no txState. So I'm
> > > > > going to put it in writeExternal()/readExternal()
> > > > >
> > > > > The last one: the deserialized transaction lacks the shared cache
> > > > > context field at TransactionProxyImpl. Perhaps it must be injected
> > > > > by GridResourceProcessor?
> > > > >
> > > > > Mon, Mar 13, 2017 at 17:27, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com>:
> > > > >
> > > > > > while starting and continuing a transaction in different JVMs I
> > > > > > run into a serialization exception in writeExternalMeta:
> > > > > >
> > > > > > @Override public void writeExternal(ObjectOutput out) throws
> > > > > > IOException {
> > > > > >     writeExternalMeta(out);
> > > > > >
> > > > > > some of the meta cannot be serialized.
> > > > > > Fri, Mar 10, 2017 at 17:25, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > >
> > > > > > Aleksey,
> > > > > >
> > > > > >
> > > > > >
> > > > > > I think I am starting to get what you want, but I have a few
> > > > > > concerns:
> > > > > >  - What is the API for the proposed change? In your test, you pass
> > > > > > an instance of a transaction created on ignite(0) to the ignite
> > > > > > instance ignite(1). This is obviously not possible in a truly
> > > > > > distributed (multi-jvm) environment.
> > > > > >  - How will you synchronize cache update actions and transaction
> > > > > > commit? Say, you have one node that decided to commit, but another
> > > > > > node is still writing within this transaction. How do you make
> > > > > > sure that two nodes will not call commit() and rollback()
> > > > > > simultaneously?
> > > > > >  - How do you make sure that either commit() or rollback() is
> > > > > > called if the originator fails?
> > > > > >
> > > > > > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <somefireone@gmail.com
> >:
> > > > > >
> > > > > > > Alexey Goncharuk, heh, my initial understanding was that the
> > > > > > > transferring of tx ownership from one node to another will
> > > > > > > happen automatically when the originating node goes down.
> > > > > > >
> > > > > > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com
> > > > > >:
> > > > > > >
> > > > > > > > I'm aiming to span a transaction over multiple threads, nodes,
> > > > > > > > JVMs (soon), so every node is able to roll back or commit the
> > > > > > > > common transaction. It turned out I need to transfer the tx
> > > > > > > > between nodes in order to commit the transaction in a
> > > > > > > > different node (in the same JVM).
> > > > > > > >
> > > > > > > > Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com>:
> > > > > > > >
> > > > > > > > > Aleksey,
> > > > > > > > >
> > > > > > > > > Do you mean that you want a concept of transferring tx
> > > > > > > > > ownership from one node to another? My initial understanding
> > > > > > > > > was that you want to be able to update keys in a transaction
> > > > > > > > > from multiple threads in parallel.
> > > > > > > > >
> > > > > > > > > --AG
> > > > > > > > >
> > > > > > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com
> > > > > > > >:
> > > > > > > > >
> > > > > > > > > > Well. Consider transaction started in one node, and
> > > > > > > > > > continued in another one. The following test describes my
> > > > > > > > > > idea:
> > > > > > > > > >
> > > > > > > > > > Ignite ignite1 = ignite(0);
> > > > > > > > > >
> > > > > > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > > > > > >
> > > > > > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");
> > > > > > > > > >
> > > > > > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > > > > > >
> > > > > > > > > > cache.put("key1", 1);
> > > > > > > > > >
> > > > > > > > > > cache.put("key2", 2);
> > > > > > > > > >
> > > > > > > > > > tx.stop();
> > > > > > > > > >
> > > > > > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > > > > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > > > > > >     Assert.assertNull(ts.tx());
> > > > > > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > > > > > >     ts.txStart(tx);
> > > > > > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > > > > > >     cache.put("key3", 3);
> > > > > > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > > > > > >     tx.commit();
> > > > > > > > > >     return true;
> > > > > > > > > > });
> > > > > > > > > >
> > > > > > > > > > fut.get();
> > > > > > > > > >
> > > > > > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > > > > > >
> > > > > > > > > > In method *ts.txStart(...)* we just rebind *tx* to current
> > > > > > > > > > thread:
> > > > > > > > > >
> > > > > > > > > > public void txStart(Transaction tx) {
> > > > > > > > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > > > > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > In method *reopenTx* we alter *threadMap* so that it binds
> > > > > > > > > > transaction to current thread.
> > > > > > > > > >
> > > > > > > > > > What do you think about it?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Tue, Mar 7, 2017 at 22:38, Denis Magda <dmagda@apache.org>:
> > > > > > > > > >
> > > > > > > > > > > Hi Alexey,
> > > > > > > > > > >
> > > > > > > > > > > Please share the rationale behind this and the
> > > > > > > > > > > thoughts, design ideas you have in mind.
> > > > > > > > > > >
> > > > > > > > > > > —
> > > > > > > > > > > Denis
> > > > > > > > > > >
> > > > > > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > > > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi all! I'm designing a distributed transaction
> > > > > > > > > > > > > which can be started at one node, and continued at
> > > > > > > > > > > > > another one. Has anybody thoughts on it?
> > > > > > > > > > > > --
> > > > > > > > > > > >
> > > > > > > > > > > > *Best Regards,*
> > > > > > > > > > > >
> > > > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > >
> > > > > > > > > > *Best Regards,*
> > > > > > > > > >
> > > > > > > > > > *Kuznetsov Aleksey*
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > > >
> > > >
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
> Why wrong? Do you know a better solution?

-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Sergi Vladykin <se...@gmail.com>.
Just serializing a TX object and deserializing it on another node is
meaningless, because the other nodes participating in the TX have to know
about the new coordinator. This will require protocol changes, and we will
definitely have fault tolerance and performance issues. IMO the whole idea
is wrong and it makes no sense to waste time on it.

Sergi


Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
The IgniteTransactionState implementation contains IgniteTxEntry instances,
which are supposed to be transferable

-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Dmitriy Setrakyan <ds...@apache.org>.
It sounds a little scary to me that we are passing transaction objects
around. Such an object may contain all sorts of Ignite context. If some data
needs to be passed across, we should create a special transfer object in
this case.
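
A minimal sketch of what such a transfer object might look like (the class
and its fields are hypothetical, just to illustrate the idea: plain
serializable data only, no Ignite context):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.lang.IgniteUuid;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class TxTransferData implements Serializable {
    private static final long serialVersionUID = 0L;

    private final IgniteUuid xid;                     // tx identifier
    private final TransactionConcurrency concurrency; // tx mode
    private final TransactionIsolation isolation;     // tx isolation
    private final Map<Object, Object> pendingWrites;  // enlisted entries

    public TxTransferData(IgniteUuid xid, TransactionConcurrency concurrency,
        TransactionIsolation isolation, Map<Object, Object> pendingWrites) {
        this.xid = xid;
        this.concurrency = concurrency;
        this.isolation = isolation;
        this.pendingWrites = new HashMap<>(pendingWrites); // defensive copy
    }

    // Getters omitted for brevity; the receiving node would rebuild its own
    // tx proxy from these fields instead of deserializing the original one.
}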

D.


On Mon, Mar 13, 2017 at 9:10 AM, ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> wrote:

> well, there a couple of issues preventing transaction proceeding.
> At first, After transaction serialization and deserialization on the remote
> server, there is no txState. So im going to put it in
> writeExternal()\readExternal()
>
> The last one is Deserialized transaction lacks of shared cache context
> field at TransactionProxyImpl. Perhaps, it must be injected by
> GridResourceProcessor ?
>
> пн, 13 мар. 2017 г. в 17:27, ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > while starting and continuing transaction in different jvms in run into
> > serialization exception in writeExternalMeta :
> >
> > @Override public void writeExternal(ObjectOutput out) throws IOException
> {
> >     writeExternalMeta(out);
> >
> > some meta is cannot be serialized.
> > пт, 10 мар. 2017 г. в 17:25, Alexey Goncharuk <
> alexey.goncharuk@gmail.com
> > >:
> >
> > Aleksey,
> >
> >
> >
> > I think I am starting to get what you want, but I have a few concerns:
> >  - What is the API for the proposed change? In your test, you pass an
> > instance of transaction created on ignite(0) to the ignite instance
> > ignite(1). This is obviously not possible in a truly distributed
> > (multi-jvm) environment.
> > - How will you synchronize cache update actions and transaction commit?
> > Say, you have one node that decided to commit, but another node is still
> > writing within this transaction. How do you make sure that two nodes will
> > not call commit() and rollback() simultaneously?
> >  - How do you make sure that either commit() or rollback() is called if
> an
> > originator failed?
> >
> > 2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <so...@gmail.com>:
> >
> > > Alexey Goncharuk, heh, my initial understanding was that transferring
> of
> > tx
> > > ownership from one node to another will be happened automatically when
> > > originating node is gone down.
> > >
> > > 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > Im aiming to span transaction on multiple threads, nodes, jvms(soon).
> > So
> > > > every node is able to rollback, or commit common transaction.It
> turned
> > > up i
> > > > need to transfer tx between nodes in order to commit transaction in
> > > > different node(in the same jvm).
> > > >
> > > > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> > > alexey.goncharuk@gmail.com
> > > > >:
> > > >
> > > > > Aleksey,
> > > > >
> > > > > Do you mean that you want a concept of transferring of tx ownership
> > > from
> > > > > one node to another? My initial understanding was that you want to
> be
> > > > able
> > > > > to update keys in a transaction from multiple threads in parallel.
> > > > >
> > > > > --AG
> > > > >
> > > > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com
> > > >:
> > > > >
> > > > > > Well. Consider transaction started in one node, and continued in
> > > > another
> > > > > > one.
> > > > > > The following test describes my idea:
> > > > > >
> > > > > > Ignite ignite1 = ignite(0);
> > > > > >
> > > > > > IgniteTransactions transactions = ignite1.transactions();
> > > > > >
> > > > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("
> > > > > > testCache");
> > > > > >
> > > > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > > > >
> > > > > > cache.put("key1", 1);
> > > > > >
> > > > > > cache.put("key2", 2);
> > > > > >
> > > > > > tx.stop();
> > > > > >
> > > > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() ->
> {
> > > > > >     IgniteTransactions ts = ignite(1).transactions();
> > > > > >     Assert.assertNull(ts.tx());
> > > > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > > > >     ts.txStart(tx);
> > > > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > > > >     cache.put("key3", 3);
> > > > > >     Assert.assertTrue(cache.remove("key2"));
> > > > > >     tx.commit();
> > > > > >     return true;
> > > > > > });
> > > > > >
> > > > > > fut.get();
> > > > > >
> > > > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > > > Assert.assertFalse(cache.containsKey("key2"));
> > > > > >
> > > > > > In method *ts.txStart(...)* we just rebind *tx* to current
> thread:
> > > > > >
> > > > > > public void txStart(Transaction tx) {
> > > > > >     TransactionProxyImpl transactionProxy =
> > (TransactionProxyImpl)tx;
> > > > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > > > >     transactionProxy.bindToCurrentThread();
> > > > > > }
> > > > > >
> > > > > > In method *reopenTx* we alter *threadMap* so that it binds
> > > transaction
> > > > > > to current thread.
> > > > > >
> > > > > > How do u think about it ?
> > > > > >
> > > > > >
> > > > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <dm...@apache.org>:
> > > > > >
> > > > > > > Hi Alexey,
> > > > > > >
> > > > > > > Please share the rational behind this and the thoughts, design
> > > ideas
> > > > > you
> > > > > > > have in mind.
> > > > > > >
> > > > > > > —
> > > > > > > Denis
> > > > > > >
> > > > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > > > alkuznetsov.sb@gmail.com>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi all! Im designing distributed transaction which can be
> > started
> > > > at
> > > > > > one
> > > > > > > > node, and continued at other one. Has anybody thoughts on it
> ?
> > > > > > > > --
> > > > > > > >
> > > > > > > > *Best Regards,*
> > > > > > > >
> > > > > > > > *Kuznetsov Aleksey*
> > > > > > >
> > > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > > >
> > > > >
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
well, there are a couple of issues preventing the transaction from
proceeding. At first: after transaction serialization and deserialization on
the remote server, there is no txState. So I'm going to put it in
writeExternal()/readExternal()

The last one: the deserialized transaction lacks the shared cache context
field at TransactionProxyImpl. Perhaps it must be injected by
GridResourceProcessor?
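
Roughly, the fragments I have in mind for TransactionProxyImpl look like
this (illustrative only, assuming the internal tx state object itself is
Serializable):

@Override public void writeExternal(ObjectOutput out) throws IOException {
    // Ship the tx state together with the proxy.
    out.writeObject(txState);
}

@Override public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
    txState = (IgniteTxState)in.readObject();

    // The shared cache context cannot travel with the object; it has to be
    // re-injected on the receiving node after deserialization, e.g. by
    // GridResourceProcessor as suggested above.
}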

-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
while starting and continuing a transaction in different JVMs I run into a
serialization exception in writeExternalMeta:

@Override public void writeExternal(ObjectOutput out) throws IOException {
    writeExternalMeta(out);

some of the meta cannot be serialized.
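
One possible workaround is to persist only the Serializable part of the
meta. A sketch, assuming the meta is exposed as a map via a hypothetical
allMeta() accessor:

private void writeExternalMeta(ObjectOutput out) throws IOException {
    Map<Integer, Object> serializableMeta = new HashMap<>();

    // Keep only the entries that can actually cross the wire.
    for (Map.Entry<Integer, Object> e : allMeta().entrySet()) {
        if (e.getValue() instanceof Serializable)
            serializableMeta.put(e.getKey(), e.getValue());
    }

    out.writeObject(serializableMeta);
}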
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Дмитрий Рябов <so...@gmail.com>.
What about sending a special message to a random/chosen node when we start a
transaction? And when the rollback procedure begins, this second node will
check the state of the originating node, and if it is down, the second node
becomes the originator?
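
A sketch of that idea on top of plain topic-based messaging (the topic name
and the take-over logic are made up; the recovery itself is the hard part
and is omitted here):

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.transactions.Transaction;

public class BackupOriginatorSketch {
    /** On the originating node, right after txStart: nominate a backup. */
    public static void nominateBackup(Ignite ignite, Transaction tx) {
        ClusterGroup backup = ignite.cluster().forRemotes().forRandom();

        ignite.message(backup).send("tx-backup", tx.xid());
    }

    /** On every node: accept nominations and watch the sender. */
    public static void listenForNominations(Ignite ignite) {
        ignite.message().localListen("tx-backup", (UUID senderId, Object xid) -> {
            // If senderId leaves the cluster before finishing the tx, this
            // node would take over as originator for that xid.
            return true; // keep listening
        });
    }
}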


Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
   - It's a draft test; in my next one I will try to serialize the
   transaction and send it across nodes.
   - That's where the STOPPED status comes in handy: you cannot restart a
   transaction on another node unless it was stopped, and you cannot commit
   while the transaction status is STOPPED. Further tests should be written
   to ensure it works; see the sketch after this list.
   - Could you provide a simple scenario?
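
A minimal sketch of the guard I mean (STOPPED is the new state proposed in
this thread, not existing API; reopenTx/bindToCurrentThread are from the
earlier draft):

public void txStart(Transaction tx) {
    // Only a transaction explicitly stopped on its previous node may be
    // re-bound; this is what makes STOPPED useful as a hand-off state.
    if (tx.state() != TransactionState.STOPPED)
        throw new IllegalStateException("Tx must be STOPPED before hand-off: " + tx.state());

    TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;

    cctx.tm().reopenTx(transactionProxy.tx()); // re-register with the tx manager
    transactionProxy.bindToCurrentThread();    // tx becomes ACTIVE for this thread
}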


-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Alexey Goncharuk <al...@gmail.com>.
Aleksey,

I think I am starting to get what you want, but I have a few concerns:
 - What is the API for the proposed change? In your test, you pass an
instance of a transaction created on ignite(0) to the ignite instance
ignite(1). This is obviously not possible in a truly distributed
(multi-jvm) environment.
- How will you synchronize cache update actions and transaction commit?
Say, you have one node that decided to commit, but another node is still
writing within this transaction. How do you make sure that two nodes will
not call commit() and rollback() simultaneously?
 - How do you make sure that either commit() or rollback() is called if an
originator failed? (One way to frame these last two concerns is sketched
after this list.)
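
A possible shape, with assumed names (an illustration, not Ignite code):
every finisher races on a single atomic decision, so exactly one of
commit() and rollback() can win, and a recovery path can safely roll back
on behalf of a failed originator. In a real cluster this decision would
have to live on, or be agreed by, the transaction coordinator.

import java.util.concurrent.atomic.AtomicReference;

final class TxFinisher {
    enum Outcome { COMMITTED, ROLLED_BACK }

    // The single decision point all nodes race on.
    private final AtomicReference<Outcome> outcome = new AtomicReference<>();

    /** Returns true only for the first caller; later finishers lose. */
    boolean tryCommit() {
        return outcome.compareAndSet(null, Outcome.COMMITTED);
    }

    /** Used both for explicit rollback and originator-failure recovery. */
    boolean tryRollback() {
        return outcome.compareAndSet(null, Outcome.ROLLED_BACK);
    }
}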

2017-03-10 15:38 GMT+03:00 Дмитрий Рябов <so...@gmail.com>:

> Alexey Goncharuk, heh, my initial understanding was that transferring of tx
> ownership from one node to another will be happened automatically when
> originating node is gone down.
>
> 2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Im aiming to span transaction on multiple threads, nodes, jvms(soon). So
> > every node is able to rollback, or commit common transaction.It turned
> up i
> > need to transfer tx between nodes in order to commit transaction in
> > different node(in the same jvm).
> >
> > пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <
> alexey.goncharuk@gmail.com
> > >:
> >
> > > Aleksey,
> > >
> > > Do you mean that you want a concept of transferring of tx ownership
> from
> > > one node to another? My initial understanding was that you want to be
> > able
> > > to update keys in a transaction from multiple threads in parallel.
> > >
> > > --AG
> > >
> > > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <alkuznetsov.sb@gmail.com
> >:
> > >
> > > > Well. Consider transaction started in one node, and continued in
> > another
> > > > one.
> > > > The following test describes my idea:
> > > >
> > > > Ignite ignite1 = ignite(0);
> > > >
> > > > IgniteTransactions transactions = ignite1.transactions();
> > > >
> > > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("
> > > > testCache");
> > > >
> > > > Transaction tx = transactions.txStart(concurrency, isolation);
> > > >
> > > > cache.put("key1", 1);
> > > >
> > > > cache.put("key2", 2);
> > > >
> > > > tx.stop();
> > > >
> > > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > > >     IgniteTransactions ts = ignite(1).transactions();
> > > >     Assert.assertNull(ts.tx());
> > > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > > >     ts.txStart(tx);
> > > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > > >     cache.put("key3", 3);
> > > >     Assert.assertTrue(cache.remove("key2"));
> > > >     tx.commit();
> > > >     return true;
> > > > });
> > > >
> > > > fut.get();
> > > >
> > > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > > Assert.assertFalse(cache.containsKey("key2"));
> > > >
> > > > In method *ts.txStart(...)* we just rebind *tx* to current thread:
> > > >
> > > > public void txStart(Transaction tx) {
> > > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > > >     cctx.tm().reopenTx(transactionProxy.tx());
> > > >     transactionProxy.bindToCurrentThread();
> > > > }
> > > >
> > > > In method *reopenTx* we alter *threadMap* so that it binds
> transaction
> > > > to current thread.
> > > >
> > > > How do u think about it ?
> > > >
> > > >
> > > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <dm...@apache.org>:
> > > >
> > > > > Hi Alexey,
> > > > >
> > > > > Please share the rational behind this and the thoughts, design
> ideas
> > > you
> > > > > have in mind.
> > > > >
> > > > > —
> > > > > Denis
> > > > >
> > > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > > alkuznetsov.sb@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Hi all! Im designing distributed transaction which can be started
> > at
> > > > one
> > > > > > node, and continued at other one. Has anybody thoughts on it ?
> > > > > > --
> > > > > >
> > > > > > *Best Regards,*
> > > > > >
> > > > > > *Kuznetsov Aleksey*
> > > > >
> > > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > > >
> > >
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>

Re: distributed transaction of non-single coordinator

Posted by Дмитрий Рябов <so...@gmail.com>.
Alexey Goncharuk, heh, my initial understanding was that transferring of tx
ownership from one node to another would happen automatically when the
originating node goes down.

2017-03-10 15:36 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Im aiming to span transaction on multiple threads, nodes, jvms(soon). So
> every node is able to rollback, or commit common transaction.It turned up i
> need to transfer tx between nodes in order to commit transaction in
> different node(in the same jvm).
>
> пт, 10 мар. 2017 г. в 15:20, Alexey Goncharuk <alexey.goncharuk@gmail.com
> >:
>
> > Aleksey,
> >
> > Do you mean that you want a concept of transferring of tx ownership from
> > one node to another? My initial understanding was that you want to be
> able
> > to update keys in a transaction from multiple threads in parallel.
> >
> > --AG
> >
> > 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
> >
> > > Well. Consider transaction started in one node, and continued in
> another
> > > one.
> > > The following test describes my idea:
> > >
> > > Ignite ignite1 = ignite(0);
> > >
> > > IgniteTransactions transactions = ignite1.transactions();
> > >
> > > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("
> > > testCache");
> > >
> > > Transaction tx = transactions.txStart(concurrency, isolation);
> > >
> > > cache.put("key1", 1);
> > >
> > > cache.put("key2", 2);
> > >
> > > tx.stop();
> > >
> > > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> > >     IgniteTransactions ts = ignite(1).transactions();
> > >     Assert.assertNull(ts.tx());
> > >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> > >     ts.txStart(tx);
> > >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> > >     cache.put("key3", 3);
> > >     Assert.assertTrue(cache.remove("key2"));
> > >     tx.commit();
> > >     return true;
> > > });
> > >
> > > fut.get();
> > >
> > > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > > Assert.assertFalse(cache.containsKey("key2"));
> > >
> > > In method *ts.txStart(...)* we just rebind *tx* to current thread:
> > >
> > > public void txStart(Transaction tx) {
> > >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> > >     cctx.tm().reopenTx(transactionProxy.tx());
> > >     transactionProxy.bindToCurrentThread();
> > > }
> > >
> > > In method *reopenTx* we alter *threadMap* so that it binds transaction
> > > to current thread.
> > >
> > > How do u think about it ?
> > >
> > >
> > > вт, 7 мар. 2017 г. в 22:38, Denis Magda <dm...@apache.org>:
> > >
> > > > Hi Alexey,
> > > >
> > > > Please share the rational behind this and the thoughts, design ideas
> > you
> > > > have in mind.
> > > >
> > > > —
> > > > Denis
> > > >
> > > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > > alkuznetsov.sb@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi all! Im designing distributed transaction which can be started
> at
> > > one
> > > > > node, and continued at other one. Has anybody thoughts on it ?
> > > > > --
> > > > >
> > > > > *Best Regards,*
> > > > >
> > > > > *Kuznetsov Aleksey*
> > > >
> > > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> > >
> >
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
I'm aiming to span a transaction over multiple threads, nodes, and JVMs
(soon), so that every node is able to roll back or commit the common
transaction. It turned out I need to transfer the tx between nodes in order
to commit the transaction on a different node (in the same JVM).

Fri, Mar 10, 2017 at 15:20, Alexey Goncharuk <al...@gmail.com>:

> Aleksey,
>
> Do you mean that you want a concept of transferring of tx ownership from
> one node to another? My initial understanding was that you want to be able
> to update keys in a transaction from multiple threads in parallel.
>
> --AG
>
> 2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:
>
> > Well. Consider transaction started in one node, and continued in another
> > one.
> > The following test describes my idea:
> >
> > Ignite ignite1 = ignite(0);
> >
> > IgniteTransactions transactions = ignite1.transactions();
> >
> > IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("
> > testCache");
> >
> > Transaction tx = transactions.txStart(concurrency, isolation);
> >
> > cache.put("key1", 1);
> >
> > cache.put("key2", 2);
> >
> > tx.stop();
> >
> > IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
> >     IgniteTransactions ts = ignite(1).transactions();
> >     Assert.assertNull(ts.tx());
> >     Assert.assertEquals(TransactionState.STOPPED, tx.state());
> >     ts.txStart(tx);
> >     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
> >     cache.put("key3", 3);
> >     Assert.assertTrue(cache.remove("key2"));
> >     tx.commit();
> >     return true;
> > });
> >
> > fut.get();
> >
> > Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> > Assert.assertEquals((long)1, (long)cache.get("key1"));
> > Assert.assertEquals((long)3, (long)cache.get("key3"));
> > Assert.assertFalse(cache.containsKey("key2"));
> >
> > In method *ts.txStart(...)* we just rebind *tx* to current thread:
> >
> > public void txStart(Transaction tx) {
> >     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
> >     cctx.tm().reopenTx(transactionProxy.tx());
> >     transactionProxy.bindToCurrentThread();
> > }
> >
> > In method *reopenTx* we alter *threadMap* so that it binds transaction
> > to current thread.
> >
> > How do u think about it ?
> >
> >
> > вт, 7 мар. 2017 г. в 22:38, Denis Magda <dm...@apache.org>:
> >
> > > Hi Alexey,
> > >
> > > Please share the rational behind this and the thoughts, design ideas
> you
> > > have in mind.
> > >
> > > —
> > > Denis
> > >
> > > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> > alkuznetsov.sb@gmail.com>
> > > wrote:
> > > >
> > > > Hi all! Im designing distributed transaction which can be started at
> > one
> > > > node, and continued at other one. Has anybody thoughts on it ?
> > > > --
> > > >
> > > > *Best Regards,*
> > > >
> > > > *Kuznetsov Aleksey*
> > >
> > > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
> >
>
-- 

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Alexey Goncharuk <al...@gmail.com>.
Aleksey,

Do you mean that you want a concept of transferring tx ownership from one
node to another? My initial understanding was that you wanted to be able
to update keys in a transaction from multiple threads in parallel.

--AG

2017-03-10 15:01 GMT+03:00 ALEKSEY KUZNETSOV <al...@gmail.com>:

> Well. Consider transaction started in one node, and continued in another
> one.
> The following test describes my idea:
>
> Ignite ignite1 = ignite(0);
>
> IgniteTransactions transactions = ignite1.transactions();
>
> IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("
> testCache");
>
> Transaction tx = transactions.txStart(concurrency, isolation);
>
> cache.put("key1", 1);
>
> cache.put("key2", 2);
>
> tx.stop();
>
> IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
>     IgniteTransactions ts = ignite(1).transactions();
>     Assert.assertNull(ts.tx());
>     Assert.assertEquals(TransactionState.STOPPED, tx.state());
>     ts.txStart(tx);
>     Assert.assertEquals(TransactionState.ACTIVE, tx.state());
>     cache.put("key3", 3);
>     Assert.assertTrue(cache.remove("key2"));
>     tx.commit();
>     return true;
> });
>
> fut.get();
>
> Assert.assertEquals(TransactionState.COMMITTED, tx.state());
> Assert.assertEquals((long)1, (long)cache.get("key1"));
> Assert.assertEquals((long)3, (long)cache.get("key3"));
> Assert.assertFalse(cache.containsKey("key2"));
>
> In method *ts.txStart(...)* we just rebind *tx* to current thread:
>
> public void txStart(Transaction tx) {
>     TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
>     cctx.tm().reopenTx(transactionProxy.tx());
>     transactionProxy.bindToCurrentThread();
> }
>
> In method *reopenTx* we alter *threadMap* so that it binds transaction
> to current thread.
>
> How do u think about it ?
>
>
> вт, 7 мар. 2017 г. в 22:38, Denis Magda <dm...@apache.org>:
>
> > Hi Alexey,
> >
> > Please share the rational behind this and the thoughts, design ideas you
> > have in mind.
> >
> > —
> > Denis
> >
> > > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <
> alkuznetsov.sb@gmail.com>
> > wrote:
> > >
> > > Hi all! Im designing distributed transaction which can be started at
> one
> > > node, and continued at other one. Has anybody thoughts on it ?
> > > --
> > >
> > > *Best Regards,*
> > >
> > > *Kuznetsov Aleksey*
> >
> > --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>

Re: distributed transaction of non-single coordinator

Posted by ALEKSEY KUZNETSOV <al...@gmail.com>.
Well, consider a transaction started on one node and continued on another
one. The following test describes my idea:

// Node 1 starts the transaction and makes the first updates.
Ignite ignite1 = ignite(0);

IgniteTransactions transactions = ignite1.transactions();

IgniteCache<String, Integer> cache = ignite1.getOrCreateCache("testCache");

Transaction tx = transactions.txStart(concurrency, isolation);

cache.put("key1", 1);
cache.put("key2", 2);

// Proposed API: detach the tx from the current thread (state -> STOPPED).
tx.stop();

IgniteInternalFuture<Boolean> fut = GridTestUtils.runAsync(() -> {
    // Another thread picks the stopped tx up, continues it and commits.
    IgniteTransactions ts = ignite(1).transactions();
    Assert.assertNull(ts.tx());
    Assert.assertEquals(TransactionState.STOPPED, tx.state());
    ts.txStart(tx); // proposed API: rebind the stopped tx to this thread
    Assert.assertEquals(TransactionState.ACTIVE, tx.state());
    cache.put("key3", 3);
    Assert.assertTrue(cache.remove("key2"));
    tx.commit();
    return true;
});

fut.get();

// Updates made on both threads are visible after the commit.
Assert.assertEquals(TransactionState.COMMITTED, tx.state());
Assert.assertEquals((long)1, (long)cache.get("key1"));
Assert.assertEquals((long)3, (long)cache.get("key3"));
Assert.assertFalse(cache.containsKey("key2"));

In the method *ts.txStart(...)* we just rebind *tx* to the current thread:

public void txStart(Transaction tx) {
    TransactionProxyImpl transactionProxy = (TransactionProxyImpl)tx;
    // Re-key the tx in the tx manager's threadMap to the calling thread.
    cctx.tm().reopenTx(transactionProxy.tx());
    transactionProxy.bindToCurrentThread();
}

In the method *reopenTx* we alter *threadMap* so that it binds the
transaction to the current thread.
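
A toy model of that threadMap rebinding (assumed names, not the actual
Ignite internals), just to make the mechanics explicit:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class ThreadTxMap<T> {
    // tx keyed by the id of the thread that currently owns it
    private final ConcurrentMap<Long, T> threadMap = new ConcurrentHashMap<>();

    /** Bind the tx to the calling thread (what txStart does initially). */
    void bind(T tx) {
        threadMap.put(Thread.currentThread().getId(), tx);
    }

    /** What reopenTx does: re-key the tx from its previous owner thread
     *  to the calling thread, so later cache ops see it as "current". */
    void reopen(long prevThreadId, T tx) {
        threadMap.remove(prevThreadId, tx);
        threadMap.put(Thread.currentThread().getId(), tx);
    }
}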

What do you think about it?


Tue, Mar 7, 2017 at 22:38, Denis Magda <dm...@apache.org>:

> Hi Alexey,
>
> Please share the rational behind this and the thoughts, design ideas you
> have in mind.
>
> —
> Denis
>
> > On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <al...@gmail.com>
> wrote:
> >
> > Hi all! Im designing distributed transaction which can be started at one
> > node, and continued at other one. Has anybody thoughts on it ?
> > --
> >
> > *Best Regards,*
> >
> > *Kuznetsov Aleksey*
>
> --

*Best Regards,*

*Kuznetsov Aleksey*

Re: distributed transaction of non-single coordinator

Posted by Denis Magda <dm...@apache.org>.
Hi Alexey,

Please share the rationale behind this and the thoughts and design ideas you have in mind.

—
Denis

> On Mar 7, 2017, at 3:19 AM, ALEKSEY KUZNETSOV <al...@gmail.com> wrote:
> 
> Hi all! Im designing distributed transaction which can be started at one
> node, and continued at other one. Has anybody thoughts on it ?
> -- 
> 
> *Best Regards,*
> 
> *Kuznetsov Aleksey*