Posted to issues@ignite.apache.org by "Taras Ledkov (JIRA)" <ji...@apache.org> on 2017/01/10 08:48:58 UTC

[jira] [Commented] (IGNITE-3018) Cache affinity calculation is slow with large nodes number

    [ https://issues.apache.org/jira/browse/IGNITE-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814358#comment-15814358 ] 

Taras Ledkov commented on IGNITE-3018:
--------------------------------------

[Pull request|https://github.com/apache/ignite/pull/684]

> Cache affinity calculation is slow with large nodes number
> ----------------------------------------------------------
>
>                 Key: IGNITE-3018
>                 URL: https://issues.apache.org/jira/browse/IGNITE-3018
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>            Reporter: Semen Boikov
>            Assignee: Taras Ledkov
>             Fix For: 2.0
>
>         Attachments: 003.png, 064.png, 100.png, 128.png, 200.png, 300.png, 400.png, 500.png, 600.png
>
>
> With a large number of cache server nodes (> 200), RendezvousAffinityFunction and FairAffinityFunction work pretty slowly.
> RendezvousAffinityFunction.assignPartitions can take hundreds of milliseconds, and for FairAffinityFunction it can take seconds.
> For RendezvousAffinityFunction most of the time is spent in MD5 hash calculation and node list sorting. As an optimization we can try to cache the {partition, node} MD5 hash or try another hash function (a sketch of the caching idea follows below the quoted description). Several minor optimizations are also possible (avoiding unnecessary allocations, a single thread-local 'get', etc.).
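
As an illustration only (a minimal sketch, not the actual Ignite patch from the pull request above): the caching idea mentioned in the description is to keep the MD5 digest of each node id so that the weight of a {partition, node} pair only needs a cheap mix with the partition number instead of a full MD5 over the concatenated pair. All names here (CachedRendezvousHash, nodeHashCache, weight, primary) are hypothetical.

{code:java}
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Collection;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of caching the per-node hash used for rendezvous (HRW) affinity. */
public class CachedRendezvousHash {
    /** Cached MD5 digest of each node id, computed once per node instead of once per pair. */
    private final Map<UUID, byte[]> nodeHashCache = new ConcurrentHashMap<>();

    /** Rendezvous weight of a {partition, node} pair: cached node digest mixed with the partition. */
    public long weight(int part, UUID nodeId) {
        byte[] nodeHash = nodeHashCache.computeIfAbsent(nodeId,
            id -> md5(id.toString().getBytes()));

        long h = 0;

        for (byte b : nodeHash)
            h = 31 * h + (b & 0xFF);

        // Cheap mixing step with the partition number, illustrative only.
        return h ^ ((long)part * 0x9E3779B97F4A7C15L);
    }

    /** Primary node for a partition: the node with the highest weight wins. */
    public UUID primary(int part, Collection<UUID> nodeIds) {
        UUID best = null;
        long bestWeight = Long.MIN_VALUE;

        for (UUID id : nodeIds) {
            long w = weight(part, id);

            if (w > bestWeight) {
                bestWeight = w;
                best = id;
            }
        }

        return best;
    }

    private static byte[] md5(byte[] in) {
        try {
            return MessageDigest.getInstance("MD5").digest(in);
        }
        catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
{code}

With node digests cached, each assignPartitions-style pass costs one cached lookup plus a few arithmetic operations per {partition, node} pair rather than a fresh MD5 over the concatenated key.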


