Posted to issues@ignite.apache.org by "Sergey Kosarev (JIRA)" <ji...@apache.org> on 2018/07/30 23:57:00 UTC

[jira] [Updated] (IGNITE-9135) TcpDiscovery - High Workload in Stable topology

     [ https://issues.apache.org/jira/browse/IGNITE-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Kosarev updated IGNITE-9135:
-----------------------------------
    Attachment: IMG_20180731_015439_HDR.jpg
                IMG_20180731_014146_HDR.jpg

> TcpDiscovery - High Workload in Stable topology
> -----------------------------------------------
>
>                 Key: IGNITE-9135
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9135
>             Project: Ignite
>          Issue Type: Bug
>            Reporter: Sergey Kosarev
>            Priority: Major
>         Attachments: IMG_20180731_014146_HDR.jpg, IMG_20180731_015439_HDR.jpg
>
>
> On a large topology (about 200 servers / 50 clients) we often see via JMX (TcpDiscoverySpiMBean) high MessageWorkerQueueSize peaks (>100) even though the cluster topology is stable. We also see a very high count (about 250000) of ProcessedMessages/ReceivedMessages for TcpDiscoveryStatusCheckMessage, whereas the count for TcpDiscoveryMetricsUpdateMessage is only about 110000.
> It looks like the value of
> org.apache.ignite.spi.discovery.tcp.ServerImpl.RingMessageWorker#metricsCheckFreq
> does not depend on topology size:
> private long metricsCheckFreq = 3 * spi.metricsUpdateFreq + 50;
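The report above comes down to metricsCheckFreq being a constant, so status checks fire at the same rate on a 10-node ring and a 250-node ring. A minimal sketch of the effect, assuming the default metricsUpdateFreq of 2000 ms; the topology-scaled variant is purely an illustration of the idea, not Ignite's actual fix:

```java
// Sketch comparing the reported constant formula with a hypothetical
// topology-scaled alternative. Names and the scaling rule are assumptions
// for illustration only.
public class MetricsCheckFreqSketch {
    // Per the report: fixed interval, independent of topology size.
    static long currentFreq(long metricsUpdateFreq) {
        return 3 * metricsUpdateFreq + 50;
    }

    // Hypothetical adjustment: grow the status-check interval with the
    // number of ring nodes so TcpDiscoveryStatusCheckMessage traffic
    // stays bounded on large rings.
    static long scaledFreq(long metricsUpdateFreq, int ringSize) {
        return 3 * metricsUpdateFreq + 50L * Math.max(1, ringSize);
    }

    public static void main(String[] args) {
        long updateFreq = 2000; // TcpDiscoverySpi default metricsUpdateFreq, ms
        // Same 6050 ms interval whether the ring has 10 or 200 nodes.
        System.out.println("current, any size: " + currentFreq(updateFreq));
        // Scaled variant backs off on a 200-node ring.
        System.out.println("scaled, 200 nodes: " + scaledFreq(updateFreq, 200));
    }
}
```

With a 200-node ring each node still checks every ~6 seconds under the constant formula, which multiplies into the hundreds of thousands of status-check messages the reporter observed.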



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)