Posted to issues@ignite.apache.org by "Konstantin Bolyandra (JIRA)" <ji...@apache.org> on 2019/01/28 21:00:00 UTC

[jira] [Comment Edited] (IGNITE-10920) Optimize HistoryAffinityAssignment heap usage.

    [ https://issues.apache.org/jira/browse/IGNITE-10920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16754348#comment-16754348 ] 

Konstantin Bolyandra edited comment on IGNITE-10920 at 1/28/19 8:59 PM:
------------------------------------------------------------------------

[~ascherbakov], thanks for review.
 # Done.
 # Studying the JOL benchmark revealed that the proposed optimization was almost useless, because the main heap consumer was object overhead from the many ArrayList instances. I tried to eliminate those instances by storing the node primary-backup order in a char[] array, since the partition count cannot exceed 2 bytes. This also gives strong locality. The optimization has diminishing returns with many backups, so I disabled it for replicated caches. Please check the PR for details. A TC run is in progress. Below are the JOL affinity cache heap size calculation results for a configuration with 32768 partitions and 32 nodes added to the topology; they show an almost 4x heap size reduction:

 
{noformat}
Heap usage [optimized=false, parts=32768, nodeCnt=32, backups=2], footprint:
  COUNT    AVG       SUM  DESCRIPTION
1115802     42  47500664  [Ljava.lang.Object;
      1     16        16  java.lang.Object
1115802     24  26779248  java.util.ArrayList
     93     24      2232  java.util.Collections$UnmodifiableRandomAccessList
      1     48        48  java.util.concurrent.ConcurrentSkipListMap
      3     32        96  java.util.concurrent.ConcurrentSkipListMap$HeadIndex
     24     24       576  java.util.concurrent.ConcurrentSkipListMap$Index
     63     24      1512  java.util.concurrent.ConcurrentSkipListMap$Node
     62     24      1488  org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
     62     40      2480  org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
2231913         74288360  (total)

Heap usage [optimized=true, parts=32768, nodeCnt=32, backups=2], footprint:
  COUNT    AVG       SUM  DESCRIPTION
  99963    144  14457240  [C
     31  24327    754160  [Ljava.util.HashMap$Node;
     62     85      5328  [Lorg.apache.ignite.cluster.ClusterNode;
  99664     16   1594624  java.lang.Integer
      1     16        16  java.lang.Object
     62     48      2976  java.util.HashMap
  99901     32   3196832  java.util.HashMap$Node
      1     48        48  java.util.concurrent.ConcurrentSkipListMap
      4     32       128  java.util.concurrent.ConcurrentSkipListMap$HeadIndex
     32     24       768  java.util.concurrent.ConcurrentSkipListMap$Index
     63     24      1512  java.util.concurrent.ConcurrentSkipListMap$Node
     62     24      1488  org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion
     62     40      2480  org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment
     62     32      1984  org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$1
     31     32       992  org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignment$2
 300001         20020576  (total)

Optimization: optimized=20020576, deoptimized=74288360, rate: 3.71

{noformat}
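To illustrate the idea (a minimal sketch, not the actual PR code): instead of one ArrayList per partition, the primary-backup node order for all partitions can be packed into a single char[], since a node index easily fits into a char. The class and method names below are hypothetical, chosen only for this example:

{noformat}
import java.util.Arrays;

// Hypothetical sketch: pack per-partition primary/backup node indices into one
// char[] instead of allocating an ArrayList per partition. The node order for
// partition p with (backups + 1) copies lives at offset p * copies.
public class PackedAssignment {
    private final char[] data;   // node indices, 2 bytes each
    private final int copies;    // primary + backups

    public PackedAssignment(int parts, int copies) {
        this.copies = copies;
        this.data = new char[parts * copies];
    }

    /** Stores the node order (primary first) for a partition. */
    public void set(int part, int[] nodeIdxs) {
        for (int i = 0; i < copies; i++)
            data[part * copies + i] = (char)nodeIdxs[i];
    }

    /** Returns the node order for a partition. */
    public int[] get(int part) {
        int[] res = new int[copies];
        for (int i = 0; i < copies; i++)
            res[i] = data[part * copies + i];
        return res;
    }

    public static void main(String[] args) {
        PackedAssignment a = new PackedAssignment(32768, 3);
        a.set(100, new int[] {5, 17, 31});
        System.out.println(Arrays.toString(a.get(100))); // prints [5, 17, 31]
    }
}
{noformat}

With one flat array there is a single object header for all partitions, which is where the ArrayList-per-partition overhead in the first footprint above comes from.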



> Optimize HistoryAffinityAssignment heap usage.
> ----------------------------------------------
>
>                 Key: IGNITE-10920
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10920
>             Project: Ignite
>          Issue Type: Improvement
>            Reporter: Alexei Scherbakov
>            Assignee: Konstantin Bolyandra
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> With large topology and large amount of caches/partitions many server discovery events may quickly produce large affinity history, eating gigabytes of heap.
> Solution: implement some kind of compression for the affinity cache map.
> For example, affinity history could be stored as a delta to some previous version.
>  
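
The delta idea from the issue description could be sketched roughly like this (a hypothetical illustration, not what the PR implements; all names are made up for the example): a full snapshot is kept for a base topology version, and later versions store only the partitions whose owner list changed.

{noformat}
import java.util.List;
import java.util.Map;

// Hypothetical sketch of delta storage: a full assignment snapshot at a base
// version, plus a map of only the partitions whose owner list changed since.
public class DeltaAssignment {
    private final List<List<Integer>> base;          // full assignment at base version
    private final Map<Integer, List<Integer>> delta; // changed partitions only

    public DeltaAssignment(List<List<Integer>> base, Map<Integer, List<Integer>> delta) {
        this.base = base;
        this.delta = delta;
    }

    /** Owner list for a partition: the delta entry if present, else the base one. */
    public List<Integer> owners(int part) {
        List<Integer> changed = delta.get(part);
        return changed != null ? changed : base.get(part);
    }

    public static void main(String[] args) {
        List<List<Integer>> base = List.of(List.of(0, 1), List.of(1, 2), List.of(2, 0));
        Map<Integer, List<Integer>> delta = Map.of(1, List.of(3, 2));
        DeltaAssignment d = new DeltaAssignment(base, delta);
        System.out.println(d.owners(0) + " " + d.owners(1)); // prints [0, 1] [3, 2]
    }
}
{noformat}

Since consecutive topology versions typically differ in only a fraction of partitions, the delta map stays small relative to a full copy of the assignment.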



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)