Posted to dev@ignite.apache.org by "Semen Boikov (JIRA)" <ji...@apache.org> on 2016/04/18 08:10:25 UTC
[jira] [Created] (IGNITE-3018) Cache affinity calculation is slow with large nodes number
Semen Boikov created IGNITE-3018:
------------------------------------
Summary: Cache affinity calculation is slow with large nodes number
Key: IGNITE-3018
URL: https://issues.apache.org/jira/browse/IGNITE-3018
Project: Ignite
Issue Type: Bug
Components: cache
Reporter: Semen Boikov
Assignee: Semen Boikov
Priority: Critical
Fix For: 1.6
With a large number of cache server nodes (> 200), RendezvousAffinityFunction and FairAffinityFunction work pretty slowly.
RendezvousAffinityFunction.assignPartitions can take hundreds of milliseconds; for FairAffinityFunction it can take seconds.
For RendezvousAffinityFunction, most time is spent in MD5 hash calculation and nodes list sorting. As an optimization we can try to cache the {partition, node} MD5 hash or try another hash function. Several minor optimizations are also possible (avoid unnecessary allocations, only one thread-local 'get', etc).
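To illustrate why the hash dominates, here is a minimal sketch of rendezvous (highest-random-weight) assignment. This is not Ignite's actual RendezvousAffinityFunction; the class name, the cheap 64-bit mix function (a stand-in for the proposed cached MD5 / alternative hash), and the parameters are hypothetical. The structure shows the cost model: for each partition, every node is hashed and the node list is sorted, so a per-(partition, node) hash that is hundreds of times cheaper than MD5 cuts the bulk of the work.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RendezvousSketch {
    // Cheap 64-bit finalizer-style mix; a hypothetical stand-in for the
    // cached or replaced MD5 hash the issue proposes.
    static long mix(long z) {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        z = (z ^ (z >>> 33)) * 0xc4ceb9fe1a85ec53L;
        return z ^ (z >>> 33);
    }

    // Assign each partition to the (backups + 1) nodes with the highest
    // per-(partition, node) weight. This is the O(parts * nodes * log(nodes))
    // loop where assignPartitions spends its time.
    static List<List<String>> assignPartitions(List<String> nodes, int parts, int backups) {
        List<List<String>> assignment = new ArrayList<>(parts);
        for (int p = 0; p < parts; p++) {
            final int part = p;
            List<String> sorted = new ArrayList<>(nodes);
            // Sort nodes by descending weight for this partition.
            sorted.sort(Comparator.comparingLong(
                (String n) -> mix(part * 31L + n.hashCode())).reversed());
            assignment.add(new ArrayList<>(sorted.subList(0, Math.min(backups + 1, sorted.size()))));
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-1", "node-2", "node-3", "node-4");
        // 8 partitions, 1 backup: each partition maps to a primary + one backup.
        List<List<String>> a = assignPartitions(nodes, 8, 1);
        System.out.println(a);
    }
}
```

The key property of rendezvous hashing is that removing one node only reassigns the partitions that node owned, so caching the per-node hash inputs across topology changes (as the issue suggests) is safe.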
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)