Posted to user@ignite.apache.org by Alper Tekinalp <al...@evam.com> on 2017/03/01 14:32:58 UTC

Re: Same Affinity For Same Key On All Caches

Hi.

So do you think that kind of behaviour is a bug, or is it how it has to be?
Will there be a ticket, or should I handle it on my own?

Regards.

On Tue, Feb 28, 2017 at 3:38 PM, Alper Tekinalp <al...@evam.com> wrote:

> Hi.
>
> I guess I was wrong about the problem. The issue does not occur when
> different nodes create the caches, but rather when partitions are reassigned.
>
> Say I created cache1 on node1, then added node2. Partitions for cache1
> will be reassigned. Then I create cache2 (regardless of node). The partition
> assignments for cache1 and cache2 are not the same.
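>
> The mismatch can be checked programmatically by comparing the owners of
> every partition in both caches through the public Affinity API. A minimal
> sketch (assuming "ignite" is a started Ignite instance; cache names are
> placeholders):
>
> import java.util.ArrayList;
> import java.util.Collection;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.cache.affinity.Affinity;
> import org.apache.ignite.cluster.ClusterNode;
>
> public class AffinityCompare {
>     /** Prints every partition whose owner nodes differ between the two caches. */
>     public static void compare(Ignite ignite, String cacheName1, String cacheName2) {
>         Affinity<Object> aff1 = ignite.affinity(cacheName1);
>         Affinity<Object> aff2 = ignite.affinity(cacheName2);
>
>         for (int p = 0; p < aff1.partitions(); p++) {
>             Collection<ClusterNode> owners1 = aff1.mapPartitionToPrimaryAndBackups(p);
>             Collection<ClusterNode> owners2 = aff2.mapPartitionToPrimaryAndBackups(p);
>
>             // Compare as lists so that order (primary first, then backups) matters.
>             if (!new ArrayList<>(owners1).equals(new ArrayList<>(owners2)))
>                 System.out.println("Partition " + p + ": " + owners1 + " vs " + owners2);
>         }
>     }
> }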
>
> When partitions are reassigned, ctx.previousAssignment(part) refers to the
> node that created the cache:
>
> previousAssignment: [127.0.0.1, 192.168.1.42] (same pair repeated for all 16 partitions)
> assignment: [127.0.0.1, 192.168.1.42] (same pair repeated for all 16 partitions)
> backups: 1
> tiers: 2
> partition set for tier:0
>   PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=16, parts=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
>   PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
> partition set for tier:1
>   PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
>   PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
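>
> For reference, the values above are what the affinity function sees through
> AffinityFunctionContext.previousAssignment(part). A minimal sketch of how one
> could log them, using a made-up wrapper that delegates to the configured
> FairAffinityFunction:
>
> import java.util.List;
> import java.util.UUID;
>
> import org.apache.ignite.cache.affinity.AffinityFunction;
> import org.apache.ignite.cache.affinity.AffinityFunctionContext;
> import org.apache.ignite.cache.affinity.fair.FairAffinityFunction;
> import org.apache.ignite.cluster.ClusterNode;
>
> /** Hypothetical wrapper used only to print what the delegate is given. */
> public class LoggingAffinityFunction implements AffinityFunction {
>     private final FairAffinityFunction delegate = new FairAffinityFunction(128);
>
>     @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
>         for (int p = 0; p < delegate.partitions(); p++)
>             System.out.println("previousAssignment(" + p + ") = " + ctx.previousAssignment(p));
>
>         return delegate.assignPartitions(ctx);
>     }
>
>     @Override public int partition(Object key) { return delegate.partition(key); }
>     @Override public int partitions() { return delegate.partitions(); }
>     @Override public void removeNode(UUID nodeId) { delegate.removeNode(nodeId); }
>     @Override public void reset() { delegate.reset(); }
> }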
>
> Full mapping for partitions:
> [127.0.0.1, 192.168.1.42] (same pair for all 16 partitions)
>
> There are no pendings for tier 0, then it tries to rebalance partitions, and
> the mapping becomes:
>
> Full mapping for partitions:
> [127.0.0.1, 192.168.1.28] (first 8 partitions)
> [127.0.0.1, 192.168.1.42] (last 8 partitions)
>
> After going through tier 1 pendings, which is all of the partitions, the mapping becomes:
>
>
> Full mapping for partitions:
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42] (first 8 partitions)
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28] (last 8 partitions)
>
> But if I destroy and recreate the cache, the previous assignments are all null:
>
> previousAssignment: null (for all 16 partitions)
> assignment: [] (empty for all 16 partitions)
> backups: 1
> tiers: 2
> partition set for tier:0
>   PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
>   PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
> partition set for tier:1
>   PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
>   PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
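>
> The destroy/recreate step itself is nothing special, roughly this (a sketch;
> the cache name is a placeholder, partition count as in our config):
>
> // Destroying and re-creating the cache discards its previous assignment history.
> ignite.destroyCache("cache1");
>
> CacheConfiguration<Integer, Object> cfg = new CacheConfiguration<>("cache1");
> cfg.setAffinity(new FairAffinityFunction(128));
>
> ignite.createCache(cfg);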
>
> Full mapping for partitions: [] (empty for all 16 partitions)
>
> And after that it assigns partitions in round-robin fashion:
>
> Full mapping for partitions:
> [127.0.0.1, 192.168.1.42]
> [127.0.0.1, 192.168.1.28]
> (alternating like this for all 16 partitions)
> And after tier 1 assignments:
>
> Full mapping for partitions:
> [127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
> [127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
> (alternating like this for all 16 partitions)
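>
> In other words, the same key can now map to different primary nodes in the
> two caches. A quick check for that (a sketch; the integer keys and cache
> names are arbitrary, "ignite" is a started instance):
>
> for (int key = 0; key < 1000; key++) {
>     org.apache.ignite.cluster.ClusterNode n1 = ignite.affinity("cache1").mapKeyToNode(key);
>     org.apache.ignite.cluster.ClusterNode n2 = ignite.affinity("cache2").mapKeyToNode(key);
>
>     if (!n1.id().equals(n2.id()))
>         System.out.println("Key " + key + " is not co-located: " + n1.id() + " / " + n2.id());
> }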
>
> That is what I found while debugging. Sorry for the verbose mail.
>
>
> On Tue, Feb 28, 2017 at 9:56 AM, Alper Tekinalp <al...@evam.com> wrote:
>
>> Hi Val,
>>
>> We are using the fair affinity function because we want to keep data more
>> evenly balanced among nodes. When I replace "new FairAffinityFunction(128)"
>> with "new RendezvousAffinityFunction(false, 128)", I could not reproduce the
>> problem.
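>>
>> For completeness, the only change is the affinity function on the cache
>> configuration, roughly like this (a sketch; the cache name is a placeholder,
>> partition count as in our setup):
>>
>> CacheConfiguration<Integer, Object> cfg = new CacheConfiguration<>("cache1");
>>
>> // false = do not exclude same-host neighbors when assigning backups; 128 partitions.
>> cfg.setAffinity(new RendezvousAffinityFunction(false, 128));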
>>
>>
>> On Tue, Feb 28, 2017 at 7:15 AM, vkulichenko <valentin.kulichenko@gmail.com> wrote:
>>
>>> Andrey,
>>>
>>> Is there an explanation for this? If all of this is true, it sounds like a
>>> bug to me, and a pretty serious one.
>>>
>>> Alper, what is the reason for using the fair affinity function? Do you have
>>> the same behavior with rendezvous (the default one)?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p10933.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>
>
>



-- 
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr