Posted to user@ignite.apache.org by sarahsamji <sa...@blackrock.com> on 2021/04/13 19:39:45 UTC
Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Hi,

*Tested with:*
- Ignite version - 2.10.0
- Multinode cluster on single/different JVM - localhost
- First node starts successfully (its shared memory endpoint also starts),
but cluster formation then stalls and is indefinitely stuck at:
GridCachePartitionExchangeManager - Failed to wait for initial partition map
exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
WARN 2021-04-13T10:53:13,531 : [main] GridCachePartitionExchangeManager -
Still waiting for initial partition map exchange.
*Logs:*
INFO 2021-04-13T12:21:20,628 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54827]
INFO 2021-04-13T12:21:20,628 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54827]
INFO 2021-04-13T12:21:20,628 :
[tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54827,
rmtPort=54827]
INFO 2021-04-13T12:21:20,633 :
[tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
Received ping request from the remote node
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54827, rmtPort=54827]
INFO 2021-04-13T12:21:20,633 :
[tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
Finished writing ping response
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54827, rmtPort=54827]
INFO 2021-04-13T12:21:20,633 :
[tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54827,
rmtPort=54827]
INFO 2021-04-13T12:21:21,642 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54828]
INFO 2021-04-13T12:21:21,642 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54828]
INFO 2021-04-13T12:21:21,642 :
[tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54828,
rmtPort=54828]
INFO 2021-04-13T12:21:21,646 :
[tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
Received ping request from the remote node
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54828, rmtPort=54828]
INFO 2021-04-13T12:21:21,646 :
[tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
Finished writing ping response
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54828, rmtPort=54828]
INFO 2021-04-13T12:21:21,646 :
[tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54828,
rmtPort=54828]
INFO 2021-04-13T12:21:22,655 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54829]
INFO 2021-04-13T12:21:22,655 :
[tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
rmtPort=54829]
INFO 2021-04-13T12:21:22,656 :
[tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54829,
rmtPort=54829]
INFO 2021-04-13T12:21:22,659 :
[tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
Received ping request from the remote node
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54829, rmtPort=54829]
INFO 2021-04-13T12:21:22,659 :
[tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
Finished writing ping response
[rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
rmtAddr=/0:0:0:0:0:0:0:1:54829, rmtPort=54829]
INFO 2021-04-13T12:21:22,659 :
[tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54829,
rmtPort=54829]
WARN 2021-04-13T12:21:30,044 : [services-deployment-worker-#76%NODE_I1%]
ServiceDeploymentManager - Failed to wait service deployment process or
timeout had been reached, timeout=10000,
taskDepId=ServiceDeploymentProcessId [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0], reqId=null]
WARN 2021-04-13T12:21:30,049 : [exchange-worker-#64%NODE_I1%] diagnostic
- Failed to wait for partition map exchange [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0], node=ada9dea1-579a-4846-b831-64144fb1f370].
Dumping pending objects that might be the cause:
WARN 2021-04-13T12:21:30,049 : [exchange-worker-#64%NODE_I1%] diagnostic
- Ready affinity version: AffinityTopologyVersion [topVer=1, minorTopVer=8]
WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%] diagnostic
- Last exchange future: GridDhtPartitionsExchangeFuture
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], topVer=2,
msgTemplate=null, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
nodeId8=ada9dea1, msg=Node joined, type=NODE_JOINED, tstamp=1618341680036],
crd=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], topVer=2,
msgTemplate=null, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
nodeId8=ada9dea1, msg=Node joined, type=NODE_JOINED, tstamp=1618341680036],
nodeId=6d8863ec, evt=NODE_JOINED], added=true, exchangeType=ALL,
initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=true,
hash=1297028313], init=true, lastVer=null,
partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0], futures=[ExplicitLockReleaseFuture
[topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], futures=[]],
AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
minorTopVer=0], futures=[]], DataStreamerReleaseFuture
[topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], futures=[]],
LocalTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
minorTopVer=0], futures=[]], AllTxReleaseFuture
[topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0],
futures=[RemoteTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
minorTopVer=0], futures=[]]]]]], exchActions=ExchangeActions
[startCaches=null, stopCaches=null, startGrps=[], stopGrps=[],
resetParts=null, stateChangeRequest=StateChangeRequest
[msg=ChangeGlobalStateMessage
[id=a6a6dacc871-b191809b-1e52-4d4f-9184-65bd75648b2c,
reqId=57d42417-cda9-4b3d-b705-c8f5b9de2ba1,
initiatingNodeId=ada9dea1-579a-4846-b831-64144fb1f370, state=ACTIVE,
baselineTopology=BaselineTopology [id=0, branchingHash=1481172057,
branchingType='New BaselineTopology',
baselineNodes=[0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831]],
forceChangeBaselineTopology=true, timestamp=1618341680041,
forceDeactivation=true],
prevBltHistItem=o.a.i.i.processors.cluster.BaselineTopologyHistoryItem@3cbd70b8,
prevState=ACTIVE, topVer=null]], affChangeMsg=null, centralizedAff=false,
forceAffReassignment=false, exchangeLocE=null,
cacheChangeFailureMsgSent=false, done=false, state=CRD,
registerCachesFuture=GridFinishedFuture [resFlag=2],
startTime=1618341680041, initTime=1618341680041, rebalancedInfo=null,
affinityReassign=false, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
evtLatch=0, remaining=HashSet [6d8863ec-7499-40d4-ad47-48a075adfed9],
mergedJoinExchMsgs=null, awaitMergedMsgs=0, super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1896798853]]
WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%]
GridCachePartitionExchangeManager - First 10 pending exchange futures
[total=1]
WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%]
GridCachePartitionExchangeManager - >>> GridDhtPartitionsExchangeFuture
[topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1],
evt=DISCOVERY_CUSTOM_EVT, evtNode=TcpDiscoveryNode
[id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=false, done=false, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- Last 10 exchange futures (total: 11):
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=false, done=false, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
[id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], rebalanced=false,
done=false, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=8], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=7], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=6], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=5], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=4], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
- >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
/0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
rebalanced=true, done=true, newCrdFut=null]
WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
- Latch manager state: ExchangeLatchManager [serverLatches=ConcurrentHashMap
{}, clientLatches=ConcurrentHashMap {}]
ERROR 2021-04-13T12:21:30,062 : [exchange-worker-#64%NODE_I1%]
TcpCommunicationSpi - Failed to send message to remote node
[node=TcpDiscoveryNode [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], msg=GridIoMessage
[plc=2, topic=TOPIC_INTERNAL_DIAGNOSTIC, topicOrd=27, ordered=false,
timeout=0, skipOnTimeout=false, msg=IgniteDiagnosticMessage [flags=1,
futId=0]]]
org.apache.ignite.IgniteCheckedException: null
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7587)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:209)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:160)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:289)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1186)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1133)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:2101)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:2184)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.processors.cluster.ClusterProcessor.sendDiagnosticMessage(ClusterProcessor.java:935)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.processors.cluster.ClusterProcessor.requestDiagnosticInfo(ClusterProcessor.java:877)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.IgniteDiagnosticPrepareContext.send(IgniteDiagnosticPrepareContext.java:131)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.dumpDebugInfo(GridCachePartitionExchangeManager.java:2188)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3423)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3195)
[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
[ignite-core-2.10.0.jar:2.10.0]
at java.lang.Thread.run(Thread.java:844) [?:?]
Caused by: java.lang.NullPointerException
at
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.createShmemClient(ConnectionClientPool.java:521)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.createCommunicationClient(ConnectionClientPool.java:428)
~[ignite-core-2.10.0.jar:2.10.0]
at
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:228)
~[ignite-core-2.10.0.jar:2.10.0]
... 12 more
ERROR 2021-04-13T12:21:30,063 : [exchange-worker-#64%NODE_I1%] diagnostic
- Failed to send diagnostic message: class o.a.i.IgniteCheckedException:
Failed to send message (node may have left the grid or TCP connection cannot
be established due to firewall issues) [node=TcpDiscoveryNode
[id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
topic=TOPIC_INTERNAL_DIAGNOSTIC, msg=IgniteDiagnosticMessage [flags=1,
futId=0], policy=2]
INFO 2021-04-13T12:21:30,067 : [exchange-worker-#64%NODE_I1%] diagnostic
- Exchange future on coordinator waiting for server response
[node=6d8863ec-7499-40d4-ad47-48a075adfed9, topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0]]
Remote node information:
Failed to send diagnostic message: class
org.apache.ignite.IgniteCheckedException: Failed to send message (node may
have left the grid or TCP connection cannot be established due to firewall
issues) [node=TcpDiscoveryNode [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
[/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
/0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
/2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
/2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
intOrder=2, lastExchangeTime=1618341679815, loc=false,
ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
topic=TOPIC_INTERNAL_DIAGNOSTIC, msg=IgniteDiagnosticMessage [flags=1,
futId=0], policy=2]
Local communication statistics:
Communication SPI statistics [rmtNode=6d8863ec-7499-40d4-ad47-48a075adfed9]
Communication SPI recovery descriptors:
Communication SPI clients:
NIO sessions statistics:
*Debugging* down to the source of the NullPointerException, I see it is
thrown at this point:
https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/internal/ConnectionClientPool.java#L521
And the value of msgFormatterSupplier is explicitly set to null in:
https://github.com/apache/ignite/blob/da8a6bb4756c998aa99494d395752be96d841ec8/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java#L752
The cluster forms successfully without shared memory on 2.10.0, and it
worked both with and without shared memory on our previous version, 2.6.0.
Is there some configuration I am missing to communicate over shared memory?
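For reference, the shared memory endpoint is enabled through
TcpCommunicationSpi. A minimal sketch (illustrative values matching the test
constants in the reply below, not the exact production config; the setters
are standard TcpCommunicationSpi API):

```java
// Sketch: a TcpCommunicationSpi with the shared memory endpoint enabled.
// Enabling shmem is what leads into the createShmemClient() NPE above;
// passing -1 to setSharedMemoryPort() disables shmem so that nodes fall
// back to plain TCP communication.
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalPort(37601);          // commPort
commSpi.setLocalPortRange(5);         // igniteCommPortRange
commSpi.setSharedMemoryPort(37235);   // shmem endpoint port; -1 disables it
commSpi.setMessageQueueLimit(8192);   // messageQueueLimit
commSpi.setConnectTimeout(10_000L);   // tcpCommTimeout
```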
Thanks in advance!
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by sarahsamji <sa...@blackrock.com>.
public class SampleTest {
    static String localhost;
    private static Random random;
    final static int discPort = 36101;
    final static int commPort = 37601;
    final static int sharedMemPort = 37235;
    final static int clusterSize = 2;
    private static final int messageQueueLimit = 8192;
    private static final long tcpCommTimeout = 10_000L;
    private static final int igniteCommPortRange = 5;
    final static int igniteDiscPortRange = clusterSize * 2;
    final static int igniteDiscMaxPort = discPort + igniteDiscPortRange;
    private static final long discoveryNetworkTimeout = 20_000L;
    private static final int default_threadpool_size = 8;
    private static final int igniteLongRunningThreadPoolSize = 8;
    private static final int igniteStreamerThreadPoolSize = 8;
    private static final String igniteDiscHost = "127.0.0.1";
    final static String tmpdir = System.getProperty("java.io.tmpdir");

    public static final int[] EVENT_TYPES = {
        EventType.EVT_CACHE_OBJECT_EXPIRED,
        EventType.EVT_NODE_JOINED,
        EventType.EVT_NODE_LEFT,
        EventType.EVT_NODE_SEGMENTED,
        EventType.EVT_CLIENT_NODE_DISCONNECTED,
        EventType.EVT_CLIENT_NODE_RECONNECTED,
        EventType.EVT_NODE_FAILED,
        EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST};
    @BeforeAll
    static void beforeAll() {
        IgniteUtils.setCurrentIgniteName("");
        ThreadContext.clearAll();
        System.setProperty("ignite.cluster.encryption.disabled", "true");
    }
    @Test
    void test() {
        int httpPort = PortFinder.getRandomPort(37000, 37500);
        localhost = "localhost:" + httpPort;
        try {
            startCluster();
            final Ignite ignite = Ignition.ignite("NODE_I1");
            assertNotNull(ignite);
            TestDomainSvcImpl.resetCounters();
        } finally {
            IgniteUtils.setCurrentIgniteName(null);
            ThreadContext.clearAll();
        }
    }
private static void startCluster() {
random = new Random();
Thread.interrupted();
startNode("NODE_1");
for (int i = 2; i <= clusterSize; i++) {
startNode("NODE_" + i);
}
Ignite i1 = Ignition.ignite("NODE_1");
assertEquals(clusterSize, i1.cluster().nodes().size());
}
private static void startNode(String instanceName) {
IgniteConfiguration igniteConfig = buildIgniteConfig(instanceName);
Ignite ignite = Ignition.start(igniteConfig);
}
private static IgniteConfiguration buildIgniteConfig(String instanceName) {
IgniteConfiguration cfg = new IgniteConfigBuilder().build();
return cfg
.setIgniteInstanceName(instanceName)
.setPeerClassLoadingEnabled(false)
.setWorkDirectory(Paths.get(tmpdir, "firecracker", "ignite", "work", String.valueOf(random.nextInt(1000000))).toString())
.setFailureDetectionTimeout(30_000L)
.setMetricsLogFrequency(300000L)
.setDataStorageConfiguration(createDataStorageConfiguration())
.setIncludeEventTypes(EVENT_TYPES)
.setCommunicationSpi(createCommunicationSpi())
.setDiscoverySpi(createDiscoverySpi())
.setPublicThreadPoolSize(default_threadpool_size)
.setDataStreamerThreadPoolSize(igniteStreamerThreadPoolSize)
.setSystemThreadPoolSize(default_threadpool_size)
.setServiceThreadPoolSize(default_threadpool_size)
.setStripedPoolSize(default_threadpool_size)
.setExecutorConfiguration(createExecutorConfig("IgniteLongRunning", igniteLongRunningThreadPoolSize))
.setConnectorConfiguration(null)
.setClientConnectorConfiguration(null)
.setBinaryConfiguration(createBinaryConfiguration())
.setFailureHandler(new StopNodeOrHaltFailureHandler(false, 0));
}
static DataStorageConfiguration createDataStorageConfiguration() {
int evictionThreshold = 90;
String DEFAULT_MEMORY_REGION = "Default_Region";
long memSize = 200 * 1024 * 1024;
DataRegionConfiguration regionConfig = new DataRegionConfiguration()
.setName(DEFAULT_MEMORY_REGION)
.setInitialSize(memSize)
.setMaxSize(memSize)
.setMetricsEnabled(true)
.setPageEvictionMode(DataPageEvictionMode.DISABLED) // only enable eviction if we can find a way to control it
.setEvictionThreshold(evictionThreshold / 100.0); // 100.0 avoids integer division (90 / 100 == 0)
DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration()
.setDefaultDataRegionConfiguration(regionConfig);
dataStorageConfiguration.setMetricsEnabled(true);
return dataStorageConfiguration;
}
static CommunicationSpi createCommunicationSpi() {
return new TcpCommunicationSpi()
.setSocketWriteTimeout(tcpCommTimeout)
.setConnectTimeout(tcpCommTimeout)
.setLocalPort(commPort)
.setLocalPortRange(igniteCommPortRange)
.setSharedMemoryPort(sharedMemPort)
.setMessageQueueLimit(messageQueueLimit);
}
static DiscoverySpi createDiscoverySpi() {
return new TcpDiscoverySpi()
.setLocalPort(discPort)
.setLocalPortRange(igniteDiscPortRange)
.setNetworkTimeout(discoveryNetworkTimeout)
.setIpFinder(createIpFinder(igniteDiscMaxPort));
}
static TcpDiscoveryIpFinder createIpFinder(final int igniteDiscMaxPort) {
return new TcpDiscoveryVmIpFinder().setAddresses(List.of(igniteDiscHost + ":" + discPort + ".." + igniteDiscMaxPort));
}
static ExecutorConfiguration createExecutorConfig(String name, int threadCount) {
return new ExecutorConfiguration(name)
.setSize(threadCount);
}
static BinaryConfiguration createBinaryConfiguration() {
return new BinaryConfiguration().setCompactFooter(false);
}
}
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
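As an aside on the discovery settings in the sample above: TcpDiscoveryVmIpFinder accepts addresses in host:startPort..endPort range syntax, so with clusterSize = 2 the finder scans discovery ports 36101 through 36105. A dependency-free sketch (class name is illustrative, not part of the sample) of the address string the test builds:

```java
public class DiscoveryRangeSketch {
    public static void main(String[] args) {
        // Same constants as SampleTest above.
        final int clusterSize = 2;
        final int discPort = 36101;
        final int igniteDiscPortRange = clusterSize * 2;              // 4 extra ports
        final int igniteDiscMaxPort = discPort + igniteDiscPortRange; // 36105
        // The single address handed to TcpDiscoveryVmIpFinder.setAddresses(...)
        String addr = "127.0.0.1:" + discPort + ".." + igniteDiscMaxPort;
        System.out.println(addr); // prints 127.0.0.1:36101..36105
    }
}
```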
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by Ilya Kazakov <ka...@gmail.com>.
Also, as far as I know, shmem is not recommended for use, and it will not be
supported in Ignite 3.
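If shared memory is not actually needed, one workaround is to disable the shmem endpoint in the reproducer's communication SPI. A sketch of the sample's createCommunicationSpi() with that change (not a verified fix; per the TcpCommunicationSpi javadoc, a sharedMemoryPort of -1 disables shared memory so nodes fall back to plain TCP):

```java
// Sketch: same as the sample's createCommunicationSpi(), but with the
// shared memory endpoint disabled (-1) so communication uses TCP only.
static CommunicationSpi createCommunicationSpi() {
    return new TcpCommunicationSpi()
        .setSocketWriteTimeout(tcpCommTimeout)
        .setConnectTimeout(tcpCommTimeout)
        .setLocalPort(commPort)
        .setLocalPortRange(igniteCommPortRange)
        .setSharedMemoryPort(-1) // instead of sharedMemPort
        .setMessageQueueLimit(messageQueueLimit);
}
```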
Mon, 21 Jun 2021 at 15:30, Ilya Kazakov <ka...@gmail.com>:
> Hello. As I can see, no one from the community wants to take this ticket.
> Try to ask on dev-list: dev@ignite.apache.org
>
> Wed, 2 Jun 2021 at 05:11, sarahsamji <sa...@blackrock.com>:
>
>> Hi, is there a fix expected for this issue in the next release?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by Ilya Kazakov <ka...@gmail.com>.
Hello. As I can see, no one from the community wants to take this ticket.
Try to ask on dev-list: dev@ignite.apache.org
Wed, 2 Jun 2021 at 05:11, sarahsamji <sa...@blackrock.com>:
> Hi, is there a fix expected for this issue in the next release?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by sarahsamji <sa...@blackrock.com>.
Hi, is there a fix expected for this issue in the next release?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by sarahsamji <sa...@blackrock.com>.
Thank you, Ilya.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by Ilya Kazakov <ka...@gmail.com>.
Hello. It looks like a bug. I have reported it:
https://issues.apache.org/jira/browse/IGNITE-14634
Ilya
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by sarahsamji <sa...@blackrock.com>.
Sample
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.*;
import org.apache.ignite.events.EventType;
import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;
import org.apache.ignite.internal.util.IgniteUtils;
import org.apache.ignite.spi.communication.CommunicationSpi;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.DiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.logging.log4j.ThreadContext;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import java.nio.file.Paths;
import java.util.List;
import java.util.Random;
import static org.junit.jupiter.api.Assertions.*;
public class SampleTest {
static String localhost;
private static Random random;
final static int discPort = 36101;
final static int commPort = 37601;
final static int sharedMemPort = 37235;
final static int clusterSize = 2;
private static final int messageQueueLimit = 8192;
private static final long tcpCommTimeout = 10_000L;
private static final int igniteCommPortRange = 5;
final static int igniteDiscPortRange = clusterSize * 2;
final static int igniteDiscMaxPort = discPort + igniteDiscPortRange;
private static final long discoveryNetworkTimeout = 20_000L;
private static final int default_threadpool_size = 8;
private static final int igniteLongRunningThreadPoolSize = 8;
private static final int igniteStreamerThreadPoolSize = 8;
private static final String igniteDiscHost = "127.0.0.1";
final static String tmpdir = System.getProperty("java.io.tmpdir");
public static final int[] EVENT_TYPES = {
EventType.EVT_CACHE_OBJECT_EXPIRED,
EventType.EVT_NODE_JOINED,
EventType.EVT_NODE_LEFT,
EventType.EVT_NODE_SEGMENTED,
EventType.EVT_CLIENT_NODE_DISCONNECTED,
EventType.EVT_CLIENT_NODE_RECONNECTED,
EventType.EVT_NODE_FAILED,
EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST};
@BeforeAll
static void beforeAll() {
IgniteUtils.setCurrentIgniteName("");
ThreadContext.clearAll();
System.setProperty("ignite.cluster.encryption.disabled", "true");
}
@Test
void test() {
int httpPort = 37500;
localhost = "localhost:" + httpPort;
try {
startCluster();
final Ignite ignite = Ignition.ignite("NODE_1");
assertNotNull(ignite);
} finally {
IgniteUtils.setCurrentIgniteName(null);
ThreadContext.clearAll();
}
}
private static void startCluster() {
random = new Random();
Thread.interrupted();
startNode("NODE_1");
for (int i = 2; i <= clusterSize; i++) {
startNode("NODE_" + i);
}
Ignite i1 = Ignition.ignite("NODE_1");
assertEquals(clusterSize, i1.cluster().nodes().size());
}
private static void startNode(String instanceName) {
IgniteConfiguration igniteConfig = new IgniteConfiguration().setIgniteInstanceName(instanceName)
.setPeerClassLoadingEnabled(false)
.setWorkDirectory(Paths.get(tmpdir,"ignite", "work",
String.valueOf(random.nextInt(1000000))).toString())
.setFailureDetectionTimeout(30_000L)
.setMetricsLogFrequency(300000L)
.setDataStorageConfiguration(createDataStorageConfiguration())
.setIncludeEventTypes(EVENT_TYPES)
.setCommunicationSpi(createCommunicationSpi())
.setDiscoverySpi(createDiscoverySpi())
.setPublicThreadPoolSize(default_threadpool_size)
.setDataStreamerThreadPoolSize(igniteStreamerThreadPoolSize)
.setSystemThreadPoolSize(default_threadpool_size)
.setServiceThreadPoolSize(default_threadpool_size)
.setStripedPoolSize(default_threadpool_size)
.setExecutorConfiguration(createExecutorConfig("IgniteLongRunning",
igniteLongRunningThreadPoolSize))
.setConnectorConfiguration(null)
.setClientConnectorConfiguration(null)
.setBinaryConfiguration(createBinaryConfiguration())
.setFailureHandler(new StopNodeOrHaltFailureHandler(false, 0));
Ignite ignite = Ignition.getOrStart(igniteConfig);
}
static DataStorageConfiguration createDataStorageConfiguration() {
int evictionThreshold = 90;
String DEFAULT_MEMORY_REGION = "Default_Region";
long memSize = 200 * 1024 * 1024;
DataRegionConfiguration regionConfig = new DataRegionConfiguration()
.setName(DEFAULT_MEMORY_REGION)
.setInitialSize(memSize)
.setMaxSize(memSize)
.setMetricsEnabled(true)
.setPageEvictionMode(DataPageEvictionMode.DISABLED)
.setEvictionThreshold(evictionThreshold / 100.0); // 100.0 avoids integer division (90 / 100 == 0)
DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration()
.setDefaultDataRegionConfiguration(regionConfig);
dataStorageConfiguration.setMetricsEnabled(true);
return dataStorageConfiguration;
}
static CommunicationSpi createCommunicationSpi() {
return new TcpCommunicationSpi()
.setSocketWriteTimeout(tcpCommTimeout)
.setConnectTimeout(tcpCommTimeout)
.setLocalPort(commPort)
.setLocalPortRange(igniteCommPortRange)
.setSharedMemoryPort(sharedMemPort)
.setMessageQueueLimit(messageQueueLimit);
}
static DiscoverySpi createDiscoverySpi() {
return new TcpDiscoverySpi()
.setLocalPort(discPort)
.setLocalPortRange(igniteDiscPortRange)
.setNetworkTimeout(discoveryNetworkTimeout)
.setIpFinder(createIpFinder(igniteDiscMaxPort));
}
static TcpDiscoveryIpFinder createIpFinder(final int igniteDiscMaxPort) {
return new TcpDiscoveryVmIpFinder().setAddresses(List.of(igniteDiscHost + ":" + discPort + ".." + igniteDiscMaxPort));
}
static ExecutorConfiguration createExecutorConfig(String name, int threadCount) {
return new ExecutorConfiguration(name)
.setSize(threadCount);
}
static BinaryConfiguration createBinaryConfiguration() {
return new BinaryConfiguration().setCompactFooter(false);
}
}
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Cannot start ignite nodes with shared memory - Ignite version 2.10.0
Posted by Ilya Kazakov <ka...@gmail.com>.
Hello. It would be great if you could share a simple reproducer.
Wed, 14 Apr 2021 at 03:39, sarahsamji <sa...@blackrock.com>:
> Hi,
>
> *
> Tested with:*
> - Ignite version - 2.10.0
> - Multinode cluster on single/different JVM - localhost
> - First node successfully starts, shared memory endpoint also starts but
> cluster formation fails moving ahead and is indefinitely stuck at
> GridCachePartitionExchangeManager - Failed to wait for initial partition
> map
> exchange. Possible reasons are:
> ^-- Transactions in deadlock.
> ^-- Long running transactions (ignore if this is the case).
> ^-- Unreleased explicit locks.
> WARN 2021-04-13T10:53:13,531 : [main] GridCachePartitionExchangeManager
> -
> Still waiting for initial partition map exchange.
>
> *Logs:*
>
> INFO 2021-04-13T12:21:20,628 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54827]
> INFO 2021-04-13T12:21:20,628 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54827]
> INFO 2021-04-13T12:21:20,628 :
> [tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
> Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54827,
> rmtPort=54827]
> INFO 2021-04-13T12:21:20,633 :
> [tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
> Received ping request from the remote node
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54827, rmtPort=54827]
> INFO 2021-04-13T12:21:20,633 :
> [tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
> Finished writing ping response
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54827, rmtPort=54827]
> INFO 2021-04-13T12:21:20,633 :
> [tcp-disco-sock-reader-[]-#6%NODE_I1%-#95%NODE_I1%] TcpDiscoverySpi -
> Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54827,
> rmtPort=54827
> INFO 2021-04-13T12:21:21,642 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54828]
> INFO 2021-04-13T12:21:21,642 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54828]
> INFO 2021-04-13T12:21:21,642 :
> [tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
> Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54828,
> rmtPort=54828]
> INFO 2021-04-13T12:21:21,646 :
> [tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
> Received ping request from the remote node
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54828, rmtPort=54828]
> INFO 2021-04-13T12:21:21,646 :
> [tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
> Finished writing ping response
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54828, rmtPort=54828]
> INFO 2021-04-13T12:21:21,646 :
> [tcp-disco-sock-reader-[]-#7%NODE_I1%-#96%NODE_I1%] TcpDiscoverySpi -
> Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54828,
> rmtPort=54828
> INFO 2021-04-13T12:21:22,655 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery accepted incoming connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54829]
> INFO 2021-04-13T12:21:22,655 :
> [tcp-disco-srvr-[:36830]-#3%NODE_I1%-#61%NODE_I1%] TcpDiscoverySpi - TCP
> discovery spawning a new thread for connection [rmtAddr=/0:0:0:0:0:0:0:1,
> rmtPort=54829]
> INFO 2021-04-13T12:21:22,656 :
> [tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
> Started serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54829,
> rmtPort=54829]
> INFO 2021-04-13T12:21:22,659 :
> [tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
> Received ping request from the remote node
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54829, rmtPort=54829]
> INFO 2021-04-13T12:21:22,659 :
> [tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
> Finished writing ping response
> [rmtNodeId=6d8863ec-7499-40d4-ad47-48a075adfed9,
> rmtAddr=/0:0:0:0:0:0:0:1:54829, rmtPort=54829]
> INFO 2021-04-13T12:21:22,659 :
> [tcp-disco-sock-reader-[]-#8%NODE_I1%-#97%NODE_I1%] TcpDiscoverySpi -
> Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:54829,
> rmtPort=54829
> WARN 2021-04-13T12:21:30,044 : [services-deployment-worker-#76%NODE_I1%]
> ServiceDeploymentManager - Failed to wait service deployment process or
> timeout had been reached, timeout=10000,
> taskDepId=ServiceDeploymentProcessId [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=0], reqId=null]
> WARN 2021-04-13T12:21:30,049 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Failed to wait for partition map exchange [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=0], node=ada9dea1-579a-4846-b831-64144fb1f370].
> Dumping pending objects that might be the cause:
> WARN 2021-04-13T12:21:30,049 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Ready affinity version: AffinityTopologyVersion [topVer=1, minorTopVer=8]
> WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Last exchange future: GridDhtPartitionsExchangeFuture
> [firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], topVer=2,
> msgTemplate=null, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
> nodeId8=ada9dea1, msg=Node joined, type=NODE_JOINED, tstamp=1618341680036],
> crd=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
> /0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
> discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=2,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], topVer=2,
> msgTemplate=null, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
> nodeId8=ada9dea1, msg=Node joined, type=NODE_JOINED, tstamp=1618341680036],
> nodeId=6d8863ec, evt=NODE_JOINED], added=true, exchangeType=ALL,
> initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=true,
> hash=1297028313], init=true, lastVer=null,
> partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=0], futures=[ExplicitLockReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], futures=[]],
> AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], futures=[]], DataStreamerReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], futures=[]],
> LocalTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], futures=[]], AllTxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0],
> futures=[RemoteTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=2,
> minorTopVer=0], futures=[]]]]]], exchActions=ExchangeActions
> [startCaches=null, stopCaches=null, startGrps=[], stopGrps=[],
> resetParts=null, stateChangeRequest=StateChangeRequest
> [msg=ChangeGlobalStateMessage
> [id=a6a6dacc871-b191809b-1e52-4d4f-9184-65bd75648b2c,
> reqId=57d42417-cda9-4b3d-b705-c8f5b9de2ba1,
> initiatingNodeId=ada9dea1-579a-4846-b831-64144fb1f370, state=ACTIVE,
> baselineTopology=BaselineTopology [id=0, branchingHash=1481172057,
> branchingType='New BaselineTopology',
>
> baselineNodes=[0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
>
> 0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831]],
> forceChangeBaselineTopology=true, timestamp=1618341680041,
> forceDeactivation=true],
>
> prevBltHistItem=o.a.i.i.processors.cluster.BaselineTopologyHistoryItem@3cbd70b8
> ,
> prevState=ACTIVE, topVer=null]], affChangeMsg=null, centralizedAff=false,
> forceAffReassignment=false, exchangeLocE=null,
> cacheChangeFailureMsgSent=false, done=false, state=CRD,
> registerCachesFuture=GridFinishedFuture [resFlag=2],
> startTime=1618341680041, initTime=1618341680041, rebalancedInfo=null,
> affinityReassign=false, span=o.a.i.i.processors.tracing.NoopSpan@319fd3e5,
> evtLatch=0, remaining=HashSet [6d8863ec-7499-40d4-ad47-48a075adfed9],
> mergedJoinExchMsgs=null, awaitMergedMsgs=0, super=GridFutureAdapter
> [ignoreInterrupts=false, state=INIT, res=null, hash=1896798853]]
> WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%]
> GridCachePartitionExchangeManager - First 10 pending exchange futures
> [total=1]
> WARN 2021-04-13T12:21:30,054 : [exchange-worker-#64%NODE_I1%]
> GridCachePartitionExchangeManager - >>> GridDhtPartitionsExchangeFuture
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1],
> evt=DISCOVERY_CUSTOM_EVT, evtNode=TcpDiscoveryNode
> [id=ada9dea1-579a-4846-b831-64144fb1f370,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
> /0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
> discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> rebalanced=false, done=false, newCrdFut=null]
> WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Last 10 exchange futures (total: 11):
> WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
> - >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
> evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
> /0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
> discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> rebalanced=false, done=false, newCrdFut=null]
> WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
> - >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
> [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], rebalanced=false,
> done=false, newCrdFut=null]
> WARN 2021-04-13T12:21:30,057 : [exchange-worker-#64%NODE_I1%] diagnostic
> - >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=1, minorTopVer=8], evt=DISCOVERY_CUSTOM_EVT,
> evtNode=TcpDiscoveryNode [id=ada9dea1-579a-4846-b831-64144fb1f370,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36830,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36830,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36830,
> /0:0:0:0:0:0:0:1%lo0:36830, /127.0.0.1:36830, /192.168.0.9:36830],
> discPort=36830, order=1, intOrder=1, lastExchangeTime=1618341689823,
> loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> rebalanced=true, done=true, newCrdFut=null]
> (... seven further GridDhtPartitionsExchangeFuture dumps omitted; they are
> identical to the one above except that minorTopVer counts down from 7 to 1 ...)
> WARN 2021-04-13T12:21:30,058 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Latch manager state: ExchangeLatchManager
> [serverLatches=ConcurrentHashMap
> {}, clientLatches=ConcurrentHashMap {}]
> ERROR 2021-04-13T12:21:30,062 : [exchange-worker-#64%NODE_I1%]
> TcpCommunicationSpi - Failed to send message to remote node
> [node=TcpDiscoveryNode [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], msg=GridIoMessage
> [plc=2, topic=TOPIC_INTERNAL_DIAGNOSTIC, topicOrd=27, ordered=false,
> timeout=0, skipOnTimeout=false, msg=IgniteDiagnosticMessage [flags=1,
> futId=0]]]
> org.apache.ignite.IgniteCheckedException: null
> at
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7587)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:209)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:160)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:289)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1186)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1133)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:2101)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:2184)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.processors.cluster.ClusterProcessor.sendDiagnosticMessage(ClusterProcessor.java:935)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.processors.cluster.ClusterProcessor.requestDiagnosticInfo(ClusterProcessor.java:877)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.IgniteDiagnosticPrepareContext.send(IgniteDiagnosticPrepareContext.java:131)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.dumpDebugInfo(GridCachePartitionExchangeManager.java:2188)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3423)
> [ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3195)
> [ignite-core-2.10.0.jar:2.10.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> [ignite-core-2.10.0.jar:2.10.0]
> at java.lang.Thread.run(Thread.java:844) [?:?]
> Caused by: java.lang.NullPointerException
> at
>
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.createShmemClient(ConnectionClientPool.java:521)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.createCommunicationClient(ConnectionClientPool.java:428)
> ~[ignite-core-2.10.0.jar:2.10.0]
> at
>
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:228)
> ~[ignite-core-2.10.0.jar:2.10.0]
> ... 12 more
> ERROR 2021-04-13T12:21:30,063 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Failed to send diagnostic message: class o.a.i.IgniteCheckedException:
> Failed to send message (node may have left the grid or TCP connection
> cannot
> be established due to firewall issues) [node=TcpDiscoveryNode
> [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> topic=TOPIC_INTERNAL_DIAGNOSTIC, msg=IgniteDiagnosticMessage [flags=1,
> futId=0], policy=2]
> INFO 2021-04-13T12:21:30,067 : [exchange-worker-#64%NODE_I1%] diagnostic
> - Exchange future on coordinator waiting for server response
> [node=6d8863ec-7499-40d4-ad47-48a075adfed9, topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=0]]
> Remote node information:
> Failed to send diagnostic message: class
> org.apache.ignite.IgniteCheckedException: Failed to send message (node may
> have left the grid or TCP connection cannot be established due to firewall
> issues) [node=TcpDiscoveryNode [id=6d8863ec-7499-40d4-ad47-48a075adfed9,
>
> consistentId=0:0:0:0:0:0:0:1%lo0,127.0.0.1,192.168.0.9,2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.0.9,
> 2601:647:4a00:a5d0:858:20c1:3958:1fd%en0,
> 2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0,
> 2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0], sockAddrs=HashSet
> [/2601:647:4a00:a5d0:d50:6d10:5577:62d3%en0:36831,
> /0:0:0:0:0:0:0:1%lo0:36831, /127.0.0.1:36831, /192.168.0.9:36831,
> /2601:647:4a00:a5d0:d0ed:9ec:aafe:87ad%en0:36831,
> /2601:647:4a00:a5d0:858:20c1:3958:1fd%en0:36831], discPort=36831, order=2,
> intOrder=2, lastExchangeTime=1618341679815, loc=false,
> ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false],
> topic=TOPIC_INTERNAL_DIAGNOSTIC, msg=IgniteDiagnosticMessage [flags=1,
> futId=0], policy=2]
> Local communication statistics:
> Communication SPI statistics [rmtNode=6d8863ec-7499-40d4-ad47-48a075adfed9]
> Communication SPI recovery descriptors:
> Communication SPI clients:
> NIO sessions statistics:
>
>
> *Debugging* down to the source of the NullPointerException, I see it is
> thrown at this point:
>
> https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/internal/ConnectionClientPool.java#L521
>
>
> And the value of msgFormatterSupplier is explicitly set to null in:
>
> https://github.com/apache/ignite/blob/da8a6bb4756c998aa99494d395752be96d841ec8/modules/core/src/main/java/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.java#L752
>
> The cluster forms successfully without shared memory in 2.10.0, and it
> worked both with and without shared memory on our previous version, 2.6.0.
> Is there some configuration I am missing to communicate over shared memory?
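> In the meantime, this is the fallback configuration I am using to force
> TCP-only communication. It is a minimal sketch, assuming that
> TcpCommunicationSpi#setSharedMemoryPort(-1) still disables the
> shared-memory endpoint in 2.10.0 (the class name TcpOnlyStart is just for
> illustration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class TcpOnlyStart {
    public static void main(String[] args) {
        // Disable the shared-memory endpoint; -1 means "do not bind a
        // shmem port", so all node-to-node traffic goes over TCP.
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setSharedMemoryPort(-1);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Node started: " + ignite.cluster().localNode().id());
        }
    }
}
```

> With this in place the exchange completes and the NPE in
> ConnectionClientPool does not appear, but of course it sidesteps shared
> memory entirely rather than fixing it.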
> Thanks in advance!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>