Posted to dev@ignite.apache.org by Andrey Mashenkov <an...@gmail.com> on 2017/03/01 14:45:24 UTC

Re: Same Affinity For Same Key On All Caches

Crossposting to dev list.

I ran a test.
It looks OK for the Rendezvous AF: the partition distribution stays the same
for caches with similar settings and the same Rendezvous AF.
But the Fair AF partition distribution can differ for two caches when one was
created before rebalancing and the other after it.

So collocation is not guaranteed for the same key across similar caches with
the same Fair AF.

PFA repro.

Is this a bug?
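
For reference, here is a minimal sketch of the kind of check the repro performs
(this is not the attached reproducer; the config path, cache names and settings
below are placeholders):

    import java.util.UUID;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cache.affinity.fair.FairAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class FairAffinityCollocationCheck {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start("config/example-ignite.xml"); // placeholder config path

            // cache1 is created before a topology change, cache2 after it.
            ignite.getOrCreateCache(cacheCfg("cache1"));
            // ... start or stop another server node here to trigger partition reassignment ...
            ignite.getOrCreateCache(cacheCfg("cache2"));

            Affinity<Object> aff1 = ignite.affinity("cache1");
            Affinity<Object> aff2 = ignite.affinity("cache2");

            // With the same AF and settings, every partition is expected to map
            // to the same primary node in both caches.
            for (int p = 0; p < aff1.partitions(); p++) {
                UUID n1 = aff1.mapPartitionToNode(p).id();
                UUID n2 = aff2.mapPartitionToNode(p).id();

                if (!n1.equals(n2))
                    System.out.println("Partition " + p + " differs: " + n1 + " vs " + n2);
            }
        }

        private static CacheConfiguration<Object, Object> cacheCfg(String name) {
            CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>(name);

            cfg.setAffinity(new FairAffinityFunction(128));
            cfg.setBackups(1);

            return cfg;
        }
    }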

On Tue, Feb 28, 2017 at 3:38 PM, Alper Tekinalp <al...@evam.com> wrote:

> Hi.
>
> I guess I was wrong about the problem. The issue does not occur when
> different nodes create the caches, but when partitions are reassigned.
>
> Say I created cache1 on node1, then added node2. Partitions for cache1
> will be reassigned. Then I create cache2 (regardless of node). Partition
> assignments for cache1 and cache2 are not the same.
>
> When partitions are reassigned, ctx.previousAssignment(part) refers to the
> node that created the cache:
>
> previousAssignment: [127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42]
> assignment: [127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42][127.0.0.1, 192.168.1.42]
> backups: 1
> tiers: 2
> partition set for tier:0
> PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=16, parts=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
> PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
> partition set for tier:1
> PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
> PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
>
> Full mapping for partitions:
>
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> There are no pendings for tier 0, then it tries to rebalance partitions and
> the mapping becomes:
>
> Full mapping for partitions:
>
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42]
> After going through tier 1 for pendings, which is all of them, the mapping becomes:
>
> Full mapping for partitions:
>
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> But if I destroy and recreate the cache, the previous assignments are all null:
>
> previousAssignment: nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
> assignment: [][][][][][][][][][][][][][][][]
> backups: 1
> tiers: 2
> partition set for tier:0
> PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
> PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
> partition set for tier:1
> PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
> PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
>
> Full mapping for partitions: [][][][][][][][][][][][][][][][]
>
>
> And after that it assigns partitions in a round-robin fashion:
>
> Full mapping for partitions:
>
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> And after tier 1 assignments:
>
> Full mapping for partitions:
>
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
>
> That is what I found while debugging. Sorry for the verbose mail.
>
>
> On Tue, Feb 28, 2017 at 9:56 AM, Alper Tekinalp <al...@evam.com> wrote:
>
>> Hi Val,
>>
>> We are using the fair affinity function because we want to keep data more
>> balanced among nodes. When I changed "new FairAffinityFunction(128)" to
>> "new RendezvousAffinityFunction(false, 128)" I could not reproduce the
>> problem.
>>
>>
>> On Tue, Feb 28, 2017 at 7:15 AM, vkulichenko <
>> valentin.kulichenko@gmail.com> wrote:
>>
>>> Andrey,
>>>
>>> Is there an explanation for this? If this is all true, it sounds like a bug
>>> to me, and a pretty serious one.
>>>
>>> Alper, what is the reason for using fair affinity function? Do you have
>>> the
>>> same behavior with rendezvous (the default one)?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp
>>> 10829p10933.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>> Alper Tekinalp
>>
>> Software Developer
>> Evam Streaming Analytics
>>
>> Atatürk Mah. Turgut Özal Bulv.
>> Gardenya 5 Plaza K:6 Ataşehir
>> 34758 İSTANBUL
>>
>> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
>> www.evam.com.tr
>> <http://www.evam.com>
>>
>
>
>
> --
> Alper Tekinalp
>
> Software Developer
> Evam Streaming Analytics
>
> Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir
> 34758 İSTANBUL
>
> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> www.evam.com.tr
> <http://www.evam.com>
>



-- 
Best regards,
Andrey V. Mashenkov

Re: Same Affinity For Same Key On All Caches

Posted by Andrey Mashenkov <an...@gmail.com>.
Good catch, Taras!

+1 for balanced Rendezvous AF instead of Fair AF.


On Wed, Mar 15, 2017 at 1:29 PM, Taras Ledkov <tl...@gridgain.com> wrote:

> Folks,
>
> I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018 that
> is related to performance of Rendezvous AF.
>
> But the integer hash distribution of the Wang/Jenkins hash is worse than MD5's.
> So I tried to use a simple partition balancer, close to the Fair AF, for the
> Rendezvous AF.
> Take a look at the heatmaps of the distributions in the issue, e.g.:
> - Comparison of the current Rendezvous AF and the new Rendezvous AF based on
> Wang/Jenkins hash:
> https://issues.apache.org/jira/secure/attachment/12858701/004.png
> - Comparison of the current Rendezvous AF and the new Rendezvous AF based on
> Wang/Jenkins hash with the partition balancer:
> https://issues.apache.org/jira/secure/attachment/12858690/balanced.004.png
>
> When the balancer is enabled, the distribution of partitions across nodes is
> close to even, but in this case there is no guarantee that a partition doesn't
> move from one node to another when a node leaves the topology.
> It is not guaranteed, but we try to minimize it, because a sorted array of
> nodes is used (as in the pure Rendezvous AF).
>
> I think we can use the new fast Rendezvous AF with a 'useBalancer' flag
> instead of the Fair AF.
>
>
> On 03.03.2017 1:56, Denis Magda wrote:
>
> What??? Unbelievable. It sounds like a design flaw to me. Any ideas how to
> fix?
>
> —
> Denis
>
> On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko <
> valentin.kulichenko@gmail.com> wrote:
>
> Adding back the dev list.
>
> Folks,
>
> Are there any opinions on the problem discussed here? Do we really need
> FairAffinityFunction if it can't guarantee cross-cache collocation?
>
> -Val
>
> On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko <valentin.kulichenko@gmail.com
> > wrote:
>
>> Hi Alex,
>>
>> I see your point. Can you please outline its advantages vs rendezvous
>> function?
>>
>> In my view issue discussed here makes it pretty much useless in vast
>> majority of use cases, and very error-prone in all others.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-
>> tp10829p11006.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Taras Ledkov
> Mail-To: tledkov@gridgain.com
>
>


-- 
Best regards,
Andrey V. Mashenkov

Re: Same Affinity For Same Key On All Caches

Posted by Taras Ledkov <tl...@gridgain.com>.
Folks,

I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018 that 
is related to performance of Rendezvous AF.

But the integer hash distribution of the Wang/Jenkins hash is worse than MD5's.
So I tried to use a simple partition balancer, close to the Fair AF, for the
Rendezvous AF.

Take a look at the heatmaps of the distributions in the issue, e.g.:
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on
Wang/Jenkins hash:
https://issues.apache.org/jira/secure/attachment/12858701/004.png
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on
Wang/Jenkins hash with the partition balancer:
https://issues.apache.org/jira/secure/attachment/12858690/balanced.004.png

When the balancer is enabled, the distribution of partitions across nodes is
close to even, but in this case there is no guarantee that a partition doesn't
move from one node to another when a node leaves the topology.
It is not guaranteed, but we try to minimize it, because a sorted array of
nodes is used (as in the pure Rendezvous AF).

I think we can use the new fast Rendezvous AF with a 'useBalancer' flag
instead of the Fair AF.
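
(For reference, by the Wang/Jenkins hash I mean the usual Thomas Wang 32-bit
integer mix, roughly as below; this is only an illustration, not necessarily
the exact code in the IGNITE-3018 patch.)

    /** Thomas Wang / Jenkins style 32-bit integer mix (illustrative sketch). */
    static int wangJenkinsHash(int key) {
        key = ~key + (key << 15); // key = (key << 15) - key - 1
        key ^= key >>> 12;
        key += key << 2;
        key ^= key >>> 4;
        key *= 2057;              // key = key + (key << 3) + (key << 11)
        key ^= key >>> 16;

        return key;
    }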

On 03.03.2017 1:56, Denis Magda wrote:
> What??? Unbelievable. It sounds like a design flaw to me. Any ideas 
> how to fix?
>
> —
> Denis
>
>> On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko 
>> <valentin.kulichenko@gmail.com 
>> <ma...@gmail.com>> wrote:
>>
>> Adding back the dev list.
>>
>> Folks,
>>
>> Are there any opinions on the problem discussed here? Do we really 
>> need FairAffinityFunction if it can't guarantee cross-cache collocation?
>>
>> -Val
>>
>> On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko 
>> <valentin.kulichenko@gmail.com 
>> <ma...@gmail.com>> wrote:
>>
>>     Hi Alex,
>>
>>     I see your point. Can you please outline its advantages vs rendezvous
>>     function?
>>
>>     In my view issue discussed here makes it pretty much useless in vast
>>     majority of use cases, and very error-prone in all others.
>>
>>     -Val
>>
>>
>>
>>     --
>>     View this message in context:
>>     http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html
>>     <http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html>
>>     Sent from the Apache Ignite Users mailing list archive at
>>     Nabble.com <http://Nabble.com>.
>>
>>
>

-- 
Taras Ledkov
Mail-To: tledkov@gridgain.com


Re: Same Affinity For Same Key On All Caches

Posted by Denis Magda <dm...@apache.org>.
What??? Unbelievable. It sounds like a design flaw to me. Any ideas how to fix?

—
Denis

> On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko <va...@gmail.com> wrote:
> 
> Adding back the dev list.
> 
> Folks,
> 
> Are there any opinions on the problem discussed here? Do we really need FairAffinityFunction if it can't guarantee cross-cache collocation?
> 
> -Val
> 
> On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko <valentin.kulichenko@gmail.com <ma...@gmail.com>> wrote:
> Hi Alex,
> 
> I see your point. Can you please outline its advantages vs rendezvous
> function?
> 
> In my view issue discussed here makes it pretty much useless in vast
> majority of use cases, and very error-prone in all others.
> 
> -Val
> 
> 
> 
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html <http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 


Re: Same Affinity For Same Key On All Caches

Posted by Valentin Kulichenko <va...@gmail.com>.
Adding back the dev list.

Folks,

Are there any opinions on the problem discussed here? Do we really need
FairAffinityFunction if it can't guarantee cross-cache collocation?

-Val

On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko <va...@gmail.com>
wrote:

> Hi Alex,
>
> I see your point. Can you please outline its advantages vs rendezvous
> function?
>
> In my view issue discussed here makes it pretty much useless in vast
> majority of use cases, and very error-prone in all others.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-
> Caches-tp10829p11006.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

Re: Same Affinity For Same Key On All Caches

Posted by vkulichenko <va...@gmail.com>.
Hi Alex,

I see your point. Can you please outline its advantages vs the rendezvous
function?

In my view the issue discussed here makes it pretty much useless in the vast
majority of use cases, and very error-prone in all others.

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Same Affinity For Same Key On All Caches

Posted by Alexey Goncharuk <al...@gmail.com>.
This does not look like a bug to me. The rendezvous affinity function is
stateless, while FairAffinityFunction relies on the previous partition
distribution among nodes, thus it IS stateful. The partition distribution
would be the same if the caches were created on the same cluster topology and
then the same sequence of topology changes was applied.
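
To illustrate the stateless part: a rendezvous-style function computes an owner
from nothing but the partition number and the current topology, so any two
caches with identical settings see identical mappings on the same topology.
A conceptual sketch only, not Ignite's actual RendezvousAffinityFunction code:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Objects;
    import java.util.UUID;

    public class RendezvousSketch {
        /** Highest-random-weight choice: depends only on (partition, nodes), no history. */
        public static UUID primaryFor(int part, List<UUID> nodeIds) {
            return nodeIds.stream()
                .max(Comparator.comparingLong(id -> mix(Objects.hash(id, part))))
                .orElseThrow(IllegalStateException::new);
        }

        /** Any deterministic mixer works for the illustration. */
        private static long mix(long h) {
            h ^= h >>> 33;
            h *= 0x9E3779B97F4A7C15L;
            h ^= h >>> 29;

            return h;
        }
    }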

2017-03-02 0:14 GMT+03:00 vkulichenko <va...@gmail.com>:

> Andrew,
>
> Yes, I believe it's a bug, let's create a ticket.
>
> Do you have any idea why this happens? The function doesn't have any state,
> so I don't see any difference between two its instances on same node for
> different caches, and two instances on different nodes for the same cache.
> This makes me think that mapping inconsistency can occur in the latter case
> as well, and if so, it's a very critical issue.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-
> Caches-tp10829p10979.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

Re: Same Affinity For Same Key On All Caches

Posted by Alper Tekinalp <al...@evam.com>.
Hi.

I created a bug ticket for that:
https://issues.apache.org/jira/browse/IGNITE-4765

Val, the problem here is that the fair affinity function calculates partition
mappings based on previous assignments. When partitions are rebalanced, the
previous assignments for a cache are known and the new assignment is calculated
based on them. But when you create a new cache there are no previous
assignments, so the calculation is different.
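
(Side note for anyone hitting this: as mentioned earlier in the thread, the
problem does not reproduce with the rendezvous function, so until the ticket
is resolved cross-cache collocation can be kept by giving every collocated
cache the same rendezvous-based configuration. A sketch; the cache name and
partition count are placeholders:)

    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CollocatedCacheConfigs {
        /** Same stateless affinity function and settings for every collocated cache. */
        public static CacheConfiguration<Integer, String> collocatedCache(String name) {
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>(name);

            cfg.setAffinity(new RendezvousAffinityFunction(false, 128));
            cfg.setBackups(1);

            return cfg;
        }
    }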


On Thu, Mar 2, 2017 at 12:14 AM, vkulichenko <va...@gmail.com>
wrote:

> Andrew,
>
> Yes, I believe it's a bug, let's create a ticket.
>
> Do you have any idea why this happens? The function doesn't have any state,
> so I don't see any difference between two its instances on same node for
> different caches, and two instances on different nodes for the same cache.
> This makes me think that mapping inconsistency can occur in the latter case
> as well, and if so, it's a very critical issue.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-
> Caches-tp10829p10979.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr
<http://www.evam.com>

Re: Same Affinity For Same Key On All Caches

Posted by vkulichenko <va...@gmail.com>.
Andrew,

Yes, I believe it's a bug, let's create a ticket.

Do you have any idea why this happens? The function doesn't have any state,
so I don't see any difference between two of its instances on the same node for
different caches, and two instances on different nodes for the same cache.
This makes me think that the mapping inconsistency can occur in the latter case
as well, and if so, it's a very critical issue.

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p10979.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.