Posted to dev@lucene.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2018/10/03 23:04:38 UTC

[JENKINS] Lucene-Solr-Tests-7.x - Build # 917 - Still Failing

Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/917/

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=11131, name=cdcr-replicator-3089-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=11131, name=cdcr-replicator-3089-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1613343786009624576 != 1613343786008576000
	at __randomizedtesting.SeedInfo.seed([5B0A46AB59A1F6D]:0)
	at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
	at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:105)
	at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
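
For context on the failure above: the AssertionError is thrown inside CdcrUpdateLog$CdcrLogReader.forwardSeek (CdcrUpdateLog.java:611) when two update-log version numbers that are expected to be equal are not (1613343786009624576 vs. 1613343786008576000). The sketch below is a hypothetical, simplified illustration of that general pattern, a reader that seeks forward through versioned log entries and asserts it landed on exactly the requested version. It is not the Solr implementation; the class name VersionedLogReader and its shape are invented for illustration only.

    // Hypothetical sketch only (not Solr source): a reader that seeks forward
    // through versioned log entries and asserts an exact version match, which
    // mirrors the shape of the failing check (two long versions compared).
    import java.util.List;

    public class VersionedLogReader {
        private final List<Long> versions; // entry versions in log order
        private int pos = 0;

        public VersionedLogReader(List<Long> versions) {
            this.versions = versions;
        }

        // Advance past entries older than targetVersion, then assert the entry
        // we landed on carries exactly the requested version.
        public void forwardSeek(long targetVersion) {
            while (pos < versions.size() && versions.get(pos) < targetVersion) {
                pos++;
            }
            long found = (pos < versions.size()) ? versions.get(pos) : -1L;
            assert found == targetVersion : found + " != " + targetVersion;
        }

        public static void main(String[] args) {
            // Run with -ea to enable assertions; the second seek fails because
            // the requested version is not present in the log.
            VersionedLogReader reader = new VersionedLogReader(List.of(10L, 20L, 40L));
            reader.forwardSeek(20L); // ok: an entry with version 20 exists
            reader.forwardSeek(30L); // AssertionError: 40 != 30
        }
    }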




Build Log:
[...truncated 13245 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 833723 INFO  (SUITE-CdcrBidirectionalTest-seed#[5B0A46AB59A1F6D]-worker) [    ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/init-core-data-001
   [junit4]   2> 833724 WARN  (SUITE-CdcrBidirectionalTest-seed#[5B0A46AB59A1F6D]-worker) [    ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=8 numCloses=8
   [junit4]   2> 833724 INFO  (SUITE-CdcrBidirectionalTest-seed#[5B0A46AB59A1F6D]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 833738 INFO  (SUITE-CdcrBidirectionalTest-seed#[5B0A46AB59A1F6D]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 833747 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 833748 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001
   [junit4]   2> 833748 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 833759 INFO  (Thread-1605) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 833759 INFO  (Thread-1605) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 833767 ERROR (Thread-1605) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 833859 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.ZkTestServer start zk server on port:46789
   [junit4]   2> 833884 INFO  (zkConnectionManagerCallback-6293-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 833903 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 833942 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 833942 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 833942 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 833946 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@79f4c8cc{/solr,null,AVAILABLE}
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.AbstractConnector Started ServerConnector@32477d40{SSL,[ssl, http/1.1]}{127.0.0.1:37145}
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.e.j.s.Server Started @834017ms
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=37145}
   [junit4]   2> 833947 ERROR (jetty-launcher-6290-thread-1) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 7.6.0
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 833947 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-10-03T22:08:36.396Z
   [junit4]   2> 833971 INFO  (zkConnectionManagerCallback-6295-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 833972 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 834204 INFO  (jetty-launcher-6290-thread-1) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:46789/solr
   [junit4]   2> 834227 INFO  (zkConnectionManagerCallback-6299-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834247 INFO  (zkConnectionManagerCallback-6301-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834333 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:37145_solr
   [junit4]   2> 834334 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.c.Overseer Overseer (id=72443812924227587-127.0.0.1:37145_solr-n_0000000000) starting
   [junit4]   2> 834366 INFO  (zkConnectionManagerCallback-6308-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834367 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:46789/solr ready
   [junit4]   2> 834372 INFO  (OverseerStateUpdate-72443812924227587-127.0.0.1:37145_solr-n_0000000000) [n:127.0.0.1:37145_solr    ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:37145_solr
   [junit4]   2> 834372 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:37145_solr
   [junit4]   2> 834382 INFO  (OverseerStateUpdate-72443812924227587-127.0.0.1:37145_solr-n_0000000000) [n:127.0.0.1:37145_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 834411 INFO  (zkCallback-6307-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 834451 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 834500 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37145.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 834516 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37145.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 834517 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37145.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 834518 INFO  (jetty-launcher-6290-thread-1) [n:127.0.0.1:37145_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001/node1/.
   [junit4]   2> 834579 INFO  (zkConnectionManagerCallback-6311-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834603 INFO  (zkConnectionManagerCallback-6314-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834605 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001
   [junit4]   2> 834605 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 834615 INFO  (Thread-1615) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 834615 INFO  (Thread-1615) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 834655 ERROR (Thread-1615) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 834715 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.ZkTestServer start zk server on port:43589
   [junit4]   2> 834747 INFO  (zkConnectionManagerCallback-6318-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834828 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 834887 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 834887 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 834887 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 834908 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@36a3f24d{/solr,null,AVAILABLE}
   [junit4]   2> 834908 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.AbstractConnector Started ServerConnector@5472755c{SSL,[ssl, http/1.1]}{127.0.0.1:44643}
   [junit4]   2> 834908 INFO  (jetty-launcher-6315-thread-1) [    ] o.e.j.s.Server Started @834978ms
   [junit4]   2> 834908 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=44643}
   [junit4]   2> 834909 ERROR (jetty-launcher-6315-thread-1) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 834909 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 834909 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 7.6.0
   [junit4]   2> 834909 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 834909 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 834909 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-10-03T22:08:37.358Z
   [junit4]   2> 834931 INFO  (zkConnectionManagerCallback-6320-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 834932 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 835070 INFO  (jetty-launcher-6315-thread-1) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:43589/solr
   [junit4]   2> 835075 INFO  (zkConnectionManagerCallback-6324-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835077 INFO  (zkConnectionManagerCallback-6326-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835144 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:44643_solr
   [junit4]   2> 835145 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.c.Overseer Overseer (id=72443812980850691-127.0.0.1:44643_solr-n_0000000000) starting
   [junit4]   2> 835156 INFO  (zkConnectionManagerCallback-6333-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835158 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:43589/solr ready
   [junit4]   2> 835159 INFO  (OverseerStateUpdate-72443812980850691-127.0.0.1:44643_solr-n_0000000000) [n:127.0.0.1:44643_solr    ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:44643_solr
   [junit4]   2> 835160 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:44643_solr
   [junit4]   2> 835167 INFO  (zkCallback-6325-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 835177 INFO  (zkCallback-6332-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 835183 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 835198 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_44643.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 835206 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_44643.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 835207 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_44643.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 835208 INFO  (jetty-launcher-6315-thread-1) [n:127.0.0.1:44643_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001/node1/.
   [junit4]   2> 835240 INFO  (zkConnectionManagerCallback-6336-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835243 INFO  (zkConnectionManagerCallback-6339-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835244 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.CdcrBidirectionalTest cluster2 zkHost = 127.0.0.1:46789/solr
   [junit4]   2> 835244 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.CdcrBidirectionalTest cluster1 zkHost = 127.0.0.1:43589/solr
   [junit4]   2> 835246 INFO  (zkConnectionManagerCallback-6341-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835250 INFO  (zkConnectionManagerCallback-6345-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 835251 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 835251 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:43589/solr ready
   [junit4]   2> 835273 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=cdcr-cluster1&maxShardsPerNode=2&name=cdcr-cluster1&nrtReplicas=1&action=CREATE&numShards=2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 835276 INFO  (OverseerThreadFactory-3050-thread-1-processing-n:127.0.0.1:44643_solr) [n:127.0.0.1:44643_solr    ] o.a.s.c.a.c.CreateCollectionCmd Create collection cdcr-cluster1
   [junit4]   2> 835388 INFO  (OverseerStateUpdate-72443812980850691-127.0.0.1:44643_solr-n_0000000000) [n:127.0.0.1:44643_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"cdcr-cluster1",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"cdcr-cluster1_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:44643/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 835391 INFO  (OverseerStateUpdate-72443812980850691-127.0.0.1:44643_solr-n_0000000000) [n:127.0.0.1:44643_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"cdcr-cluster1",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"cdcr-cluster1_shard2_replica_n3",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:44643/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 835504 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr    x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=cdcr-cluster1&newCollection=true&collection=cdcr-cluster1&version=2&replicaType=NRT&coreNodeName=core_node4&name=cdcr-cluster1_shard2_replica_n3&action=CREATE&numShards=2&shard=shard2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin
   [junit4]   2> 835505 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr    x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 835508 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr    x:cdcr-cluster1_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=cdcr-cluster1&newCollection=true&collection=cdcr-cluster1&version=2&replicaType=NRT&coreNodeName=core_node2&name=cdcr-cluster1_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin
   [junit4]   2> 836554 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
   [junit4]   2> 836576 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
   [junit4]   2> 836586 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.IndexSchema [cdcr-cluster1_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 836592 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 836593 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'cdcr-cluster1_shard1_replica_n1' using configuration from collection cdcr-cluster1, trusted=true
   [junit4]   2> 836593 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_44643.solr.core.cdcr-cluster1.shard1.replica_n1' (registry 'solr.core.cdcr-cluster1.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 836594 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SolrCore [[cdcr-cluster1_shard1_replica_n1] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001/node1/cdcr-cluster1_shard1_replica_n1], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001/node1/./cdcr-cluster1_shard1_replica_n1/data/]
   [junit4]   2> 836598 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.IndexSchema [cdcr-cluster1_shard2_replica_n3] Schema name=minimal
   [junit4]   2> 836601 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 836601 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.CoreContainer Creating SolrCore 'cdcr-cluster1_shard2_replica_n3' using configuration from collection cdcr-cluster1, trusted=true
   [junit4]   2> 836601 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_44643.solr.core.cdcr-cluster1.shard2.replica_n3' (registry 'solr.core.cdcr-cluster1.shard2.replica_n3') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 836602 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SolrCore [[cdcr-cluster1_shard2_replica_n3] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001/node1/cdcr-cluster1_shard2_replica_n3], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster1-001/node1/./cdcr-cluster1_shard2_replica_n3/data/]
   [junit4]   2> 836709 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.CdcrUpdateLog
   [junit4]   2> 836710 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 836711 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 836711 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 836713 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@30510869[cdcr-cluster1_shard1_replica_n1] main]
   [junit4]   2> 836714 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/cdcr-cluster1
   [junit4]   2> 836716 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/cdcr-cluster1
   [junit4]   2> 836716 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.ZkIndexSchemaReader Creating ZooKeeper watch for the managed schema at /configs/cdcr-cluster1/managed-schema
   [junit4]   2> 836716 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.ZkIndexSchemaReader Current schema version 0 is already the latest
   [junit4]   2> 836717 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 836719 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.h.CdcrBufferStateManager Created znode /collections/cdcr-cluster1/cdcr/state/buffer
   [junit4]   2> 836720 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.h.CdcrProcessStateManager Created znode /collections/cdcr-cluster1/cdcr/state/process
   [junit4]   2> 836724 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1613343772296347648
   [junit4]   2> 836727 INFO  (searcherExecutor-3055-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard1_replica_n1 c:cdcr-cluster1 s:shard1 r:core_node2) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SolrCore [cdcr-cluster1_shard1_replica_n1] Registered new searcher Searcher@30510869[cdcr-cluster1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 836727 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.CdcrUpdateLog
   [junit4]   2> 836728 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 836729 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 836729 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 836730 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster1/terms/shard1 to Terms{values={core_node2=0}, version=0}
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:44643/solr/cdcr-cluster1_shard1_replica_n1/
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 836733 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@61ff5504[cdcr-cluster1_shard2_replica_n3] main]
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SyncStrategy https://127.0.0.1:44643/solr/cdcr-cluster1_shard1_replica_n1/ has no replicas
   [junit4]   2> 836733 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
   [junit4]   2> 836735 INFO  (zkCallback-6325-thread-1) [    ] o.a.s.h.CdcrLeaderStateManager Received new leader state @ cdcr-cluster1:shard1
   [junit4]   2> 836736 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/cdcr-cluster1
   [junit4]   2> 836736 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/cdcr-cluster1
   [junit4]   2> 836736 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.ZkIndexSchemaReader Creating ZooKeeper watch for the managed schema at /configs/cdcr-cluster1/managed-schema
   [junit4]   2> 836737 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.ZkIndexSchemaReader Current schema version 0 is already the latest
   [junit4]   2> 836737 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:44643/solr/cdcr-cluster1_shard1_replica_n1/ shard1
   [junit4]   2> 836738 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 836746 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1613343772319416320
   [junit4]   2> 836749 INFO  (searcherExecutor-3056-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SolrCore [cdcr-cluster1_shard2_replica_n3] Registered new searcher Searcher@61ff5504[cdcr-cluster1_shard2_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 836752 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster1/terms/shard2 to Terms{values={core_node4=0}, version=0}
   [junit4]   2> 836754 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 836754 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 836754 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/
   [junit4]   2> 836754 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 836755 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SyncStrategy https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/ has no replicas
   [junit4]   2> 836755 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
   [junit4]   2> 836756 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrLeaderStateManager Received new leader state @ cdcr-cluster1:shard2
   [junit4]   2> 836763 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/ shard2
   [junit4]   2> 836866 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 836871 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=cdcr-cluster1&newCollection=true&collection=cdcr-cluster1&version=2&replicaType=NRT&coreNodeName=core_node4&name=cdcr-cluster1_shard2_replica_n3&action=CREATE&numShards=2&shard=shard2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin} status=0 QTime=1367
   [junit4]   2> 836889 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 836894 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=cdcr-cluster1&newCollection=true&collection=cdcr-cluster1&version=2&replicaType=NRT&coreNodeName=core_node2&name=cdcr-cluster1_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin} status=0 QTime=1386
   [junit4]   2> 836923 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr    ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 30 seconds. Check all shard replicas
   [junit4]   2> 837007 INFO  (zkCallback-6325-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/cdcr-cluster1/state.json] for collection [cdcr-cluster1] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 837292 INFO  (OverseerCollectionConfigSetProcessor-72443812980850691-127.0.0.1:44643_solr-n_0000000000) [n:127.0.0.1:44643_solr    ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 837924 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=cdcr-cluster1&maxShardsPerNode=2&name=cdcr-cluster1&nrtReplicas=1&action=CREATE&numShards=2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin&version=2} status=0 QTime=2650
   [junit4]   2> 837955 INFO  (zkConnectionManagerCallback-6349-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 837992 INFO  (zkConnectionManagerCallback-6353-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 837993 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 837994 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:46789/solr ready
   [junit4]   2> 838052 INFO  (qtp623115052-10940) [n:127.0.0.1:37145_solr    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=cdcr-cluster2&maxShardsPerNode=2&name=cdcr-cluster2&nrtReplicas=1&action=CREATE&numShards=2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 838092 INFO  (OverseerThreadFactory-3037-thread-1-processing-n:127.0.0.1:37145_solr) [n:127.0.0.1:37145_solr    ] o.a.s.c.a.c.CreateCollectionCmd Create collection cdcr-cluster2
   [junit4]   2> 838208 INFO  (OverseerStateUpdate-72443812924227587-127.0.0.1:37145_solr-n_0000000000) [n:127.0.0.1:37145_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"cdcr-cluster2",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"cdcr-cluster2_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:37145/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 838210 INFO  (OverseerStateUpdate-72443812924227587-127.0.0.1:37145_solr-n_0000000000) [n:127.0.0.1:37145_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"cdcr-cluster2",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"cdcr-cluster2_shard2_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:37145/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 838426 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr    x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=cdcr-cluster2&newCollection=true&collection=cdcr-cluster2&version=2&replicaType=NRT&coreNodeName=core_node3&name=cdcr-cluster2_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin
   [junit4]   2> 838426 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr    x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 838430 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr    x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=cdcr-cluster2&newCollection=true&collection=cdcr-cluster2&version=2&replicaType=NRT&coreNodeName=core_node4&name=cdcr-cluster2_shard2_replica_n2&action=CREATE&numShards=2&shard=shard2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin
   [junit4]   2> 839445 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
   [junit4]   2> 839459 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
   [junit4]   2> 839469 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.IndexSchema [cdcr-cluster2_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 839473 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.IndexSchema [cdcr-cluster2_shard2_replica_n2] Schema name=minimal
   [junit4]   2> 839475 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 839475 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.CoreContainer Creating SolrCore 'cdcr-cluster2_shard2_replica_n2' using configuration from collection cdcr-cluster2, trusted=true
   [junit4]   2> 839475 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37145.solr.core.cdcr-cluster2.shard2.replica_n2' (registry 'solr.core.cdcr-cluster2.shard2.replica_n2') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 839475 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SolrCore [[cdcr-cluster2_shard2_replica_n2] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001/node1/cdcr-cluster2_shard2_replica_n2], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001/node1/./cdcr-cluster2_shard2_replica_n2/data/]
   [junit4]   2> 839481 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 839481 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'cdcr-cluster2_shard1_replica_n1' using configuration from collection cdcr-cluster2, trusted=true
   [junit4]   2> 839482 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37145.solr.core.cdcr-cluster2.shard1.replica_n1' (registry 'solr.core.cdcr-cluster2.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4f34ceff
   [junit4]   2> 839482 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SolrCore [[cdcr-cluster2_shard1_replica_n1] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001/node1/cdcr-cluster2_shard1_replica_n1], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001/cdcr-cluster2-001/node1/./cdcr-cluster2_shard1_replica_n1/data/]
   [junit4]   2> 839615 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.CdcrUpdateLog
   [junit4]   2> 839615 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 839628 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 839628 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 839631 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.CdcrUpdateLog
   [junit4]   2> 839631 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 839631 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.SolrIndexSearcher Opening [Searcher@3b76707[cdcr-cluster2_shard2_replica_n2] main]
   [junit4]   2> 839632 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 839632 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/cdcr-cluster2
   [junit4]   2> 839632 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 839633 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/cdcr-cluster2
   [junit4]   2> 839633 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.ZkIndexSchemaReader Creating ZooKeeper watch for the managed schema at /configs/cdcr-cluster2/managed-schema
   [junit4]   2> 839633 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.ZkIndexSchemaReader Current schema version 0 is already the latest
   [junit4]   2> 839634 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 839635 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@cb801a8[cdcr-cluster2_shard1_replica_n1] main]
   [junit4]   2> 839636 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/cdcr-cluster2
   [junit4]   2> 839636 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrBufferStateManager Created znode /collections/cdcr-cluster2/cdcr/state/buffer
   [junit4]   2> 839636 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/cdcr-cluster2
   [junit4]   2> 839636 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.ZkIndexSchemaReader Creating ZooKeeper watch for the managed schema at /configs/cdcr-cluster2/managed-schema
   [junit4]   2> 839637 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.ZkIndexSchemaReader Current schema version 0 is already the latest
   [junit4]   2> 839637 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrProcessStateManager Created znode /collections/cdcr-cluster2/cdcr/state/process
   [junit4]   2> 839638 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 839652 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1613343775366578176
   [junit4]   2> 839653 INFO  (searcherExecutor-3068-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SolrCore [cdcr-cluster2_shard1_replica_n1] Registered new searcher Searcher@cb801a8[cdcr-cluster2_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 839660 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster2/terms/shard1 to Terms{values={core_node3=0}, version=0}
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:37145/solr/cdcr-cluster2_shard1_replica_n1/
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SyncStrategy https://127.0.0.1:37145/solr/cdcr-cluster2_shard1_replica_n1/ has no replicas
   [junit4]   2> 839663 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
   [junit4]   2> 839664 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1613343775379161088
   [junit4]   2> 839664 INFO  (searcherExecutor-3067-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SolrCore [cdcr-cluster2_shard2_replica_n2] Registered new searcher Searcher@3b76707[cdcr-cluster2_shard2_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 839675 INFO  (zkCallback-6300-thread-2) [    ] o.a.s.h.CdcrLeaderStateManager Received new leader state @ cdcr-cluster2:shard1
   [junit4]   2> 839680 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster2/terms/shard2 to Terms{values={core_node4=0}, version=0}
   [junit4]   2> 839681 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:37145/solr/cdcr-cluster2_shard1_replica_n1/ shard1
   [junit4]   2> 839684 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 839684 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 839684 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:37145/solr/cdcr-cluster2_shard2_replica_n2/
   [junit4]   2> 839684 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 839684 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SyncStrategy https://127.0.0.1:37145/solr/cdcr-cluster2_shard2_replica_n2/ has no replicas
   [junit4]   2> 839685 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
   [junit4]   2> 839693 INFO  (zkCallback-6300-thread-2) [    ] o.a.s.h.CdcrLeaderStateManager Received new leader state @ cdcr-cluster2:shard2
   [junit4]   2> 839696 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:37145/solr/cdcr-cluster2_shard2_replica_n2/ shard2
   [junit4]   2> 839798 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 839800 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=cdcr-cluster2&newCollection=true&collection=cdcr-cluster2&version=2&replicaType=NRT&coreNodeName=core_node4&name=cdcr-cluster2_shard2_replica_n2&action=CREATE&numShards=2&shard=shard2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin} status=0 QTime=1370
   [junit4]   2> 839833 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 839836 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=cdcr-cluster2&newCollection=true&collection=cdcr-cluster2&version=2&replicaType=NRT&coreNodeName=core_node3&name=cdcr-cluster2_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin} status=0 QTime=1411
   [junit4]   2> 839840 INFO  (qtp623115052-10940) [n:127.0.0.1:37145_solr    ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 30 seconds. Check all shard replicas
   [junit4]   2> 839938 INFO  (zkCallback-6300-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/cdcr-cluster2/state.json] for collection [cdcr-cluster2] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 840066 INFO  (OverseerCollectionConfigSetProcessor-72443812924227587-127.0.0.1:37145_solr-n_0000000000) [n:127.0.0.1:37145_solr    ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 840840 INFO  (qtp623115052-10940) [n:127.0.0.1:37145_solr    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=cdcr-cluster2&maxShardsPerNode=2&name=cdcr-cluster2&nrtReplicas=1&action=CREATE&numShards=2&property.solr.directoryFactory=solr.StandardDirectoryFactory&wt=javabin&version=2} status=0 QTime=2788
   [junit4]   2> 840843 INFO  (zkConnectionManagerCallback-6360-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 840844 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 840845 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:46789/solr ready
   [junit4]   2> 840865 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 840865 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=6
   [junit4]   2> 840883 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 840883 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 840884 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={_stateVer_=cdcr-cluster2:4&action=COLLECTIONCHECKPOINT&wt=javabin&version=2} status=0 QTime=34
   [junit4]   2> 840884 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Create new update log reader for target cdcr-cluster2 with checkpoint -1 @ cdcr-cluster1:shard2
   [junit4]   2> 840884 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Attempting to bootstrap target collection: cdcr-cluster2, shard: shard2
   [junit4]   2> 840884 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Submitting bootstrap task to executor
   [junit4]   2> 840903 INFO  (zkCallback-6325-thread-1) [    ] o.a.s.h.CdcrProcessStateManager The CDCR process state has changed: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/cdcr-cluster1/cdcr/state/process @ cdcr-cluster1:shard2
   [junit4]   2> 840903 INFO  (cdcr-bootstrap-status-6356-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Attempting to bootstrap target collection: cdcr-cluster2 shard: shard2 leader: https://127.0.0.1:37145/solr/cdcr-cluster2_shard2_replica_n2/
   [junit4]   2> 840904 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrProcessStateManager The CDCR process state has changed: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/cdcr-cluster1/cdcr/state/process @ cdcr-cluster1:shard1
   [junit4]   2> 840904 INFO  (zkCallback-6325-thread-1) [    ] o.a.s.h.CdcrProcessStateManager Received new CDCR process state from watcher: STARTED @ cdcr-cluster1:shard2
   [junit4]   2> 840904 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/cdcr params={qt=/cdcr&_stateVer_=cdcr-cluster1:4&action=start&wt=javabin&version=2} status=0 QTime=62
   [junit4]   2> 840904 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrProcessStateManager Received new CDCR process state from watcher: STARTED @ cdcr-cluster1:shard1
   [junit4]   2> 840941 INFO  (zkConnectionManagerCallback-6365-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 840941 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={qt=/cdcr&masterUrl=https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/&action=BOOTSTRAP&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 840942 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 840942 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 840942 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={qt=/cdcr&action=BOOTSTRAP_STATUS&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 840943 INFO  (cdcr-bootstrap-status-6356-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager CDCR bootstrap running for 1 seconds, sleeping for 2000 ms
   [junit4]   2> 840943 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:46789/solr ready
   [junit4]   2> 840959 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343776737067008,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 840963 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 840964 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 840964 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&commit_end_point=true&wt=javabin&version=2} status=0 QTime=5
   [junit4]   2> 840976 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0
   [junit4]   2> 840977 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 840977 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 840977 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 840977 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 840977 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.IndexFetcher New index in Master. Deleting mine...
   [junit4]   2> 840983 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.s.SolrIndexSearcher Opening [Searcher@7643fa9b[cdcr-cluster2_shard2_replica_n2] main]
   [junit4]   2> 840985 INFO  (searcherExecutor-3067-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.SolrCore [cdcr-cluster2_shard2_replica_n2] Registered new searcher Searcher@7643fa9b[cdcr-cluster2_shard2_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 840985 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard2_replica_n2 c:cdcr-cluster2 s:shard2 r:core_node4) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrRequestHandler No replay needed.
   [junit4]   2> 840989 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 840989 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 840991 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 840991 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 841000 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={_stateVer_=cdcr-cluster2:4&action=COLLECTIONCHECKPOINT&wt=javabin&version=2} status=0 QTime=52
   [junit4]   2> 841007 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrReplicatorManager Create new update log reader for target cdcr-cluster2 with checkpoint -1 @ cdcr-cluster1:shard1
   [junit4]   2> 841007 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrReplicatorManager Attempting to bootstrap target collection: cdcr-cluster2, shard: shard1
   [junit4]   2> 841007 INFO  (zkCallback-6325-thread-2) [    ] o.a.s.h.CdcrReplicatorManager Submitting bootstrap task to executor
   [junit4]   2> 841027 INFO  (cdcr-bootstrap-status-6361-thread-1) [    ] o.a.s.h.CdcrReplicatorManager Attempting to bootstrap target collection: cdcr-cluster2 shard: shard1 leader: https://127.0.0.1:37145/solr/cdcr-cluster2_shard1_replica_n1/
   [junit4]   2> 841040 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 841040 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={qt=/cdcr&masterUrl=https://127.0.0.1:44643/solr/cdcr-cluster1_shard1_replica_n1/&action=BOOTSTRAP&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 841042 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={qt=/cdcr&action=BOOTSTRAP_STATUS&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 841042 INFO  (cdcr-bootstrap-status-6361-thread-1) [    ] o.a.s.h.CdcrReplicatorManager CDCR bootstrap running for 1 seconds, sleeping for 2000 ms
   [junit4]   2> 841062 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343776845070336,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 841062 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 841062 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 841063 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&commit_end_point=true&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 841068 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0
   [junit4]   2> 841068 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 841069 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 841069 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 841069 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 841069 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.IndexFetcher New index in Master. Deleting mine...
   [junit4]   2> 841070 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@2ea545c7[cdcr-cluster2_shard1_replica_n1] main]
   [junit4]   2> 841072 INFO  (searcherExecutor-3068-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.SolrCore [cdcr-cluster2_shard1_replica_n1] Registered new searcher Searcher@2ea545c7[cdcr-cluster2_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 841076 INFO  (recoveryExecutor-6297-thread-1-processing-n:127.0.0.1:37145_solr x:cdcr-cluster2_shard1_replica_n1 c:cdcr-cluster2 s:shard1 r:core_node3) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrRequestHandler No replay needed.
   [junit4]   2> 842905 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.CdcrBidirectionalTest Adding 10 docs with commit=true, numDocs=100
   [junit4]   2> 842919 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster1/terms/shard2 to Terms{values={core_node4=1}, version=1}
   [junit4]   2> 842919 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/update params={_stateVer_=cdcr-cluster1:4&wt=javabin&version=2} status=0 QTime=4
   [junit4]   2> 842963 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/cdcr-cluster1/terms/shard1 to Terms{values={core_node2=1}, version=1}
   [junit4]   2> 842963 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update params={_stateVer_=cdcr-cluster1:4&wt=javabin&version=2} status=0 QTime=30
   [junit4]   2> 842964 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={qt=/cdcr&action=BOOTSTRAP_STATUS&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 842964 INFO  (cdcr-bootstrap-status-6356-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager CDCR bootstrap successful in 3 seconds
   [junit4]   2> 842985 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343778861481984,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 842985 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@77e4032a commitCommandVersion:1613343778861481984
   [junit4]   2> 843004 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343778881404928,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 843004 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@39d8c6b6 commitCommandVersion:1613343778881404928
   [junit4]   2> 843028 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 843028 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 843040 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 843040 INFO  (qtp623115052-10946) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 843041 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={_stateVer_=cdcr-cluster2:4&action=COLLECTIONCHECKPOINT&wt=javabin&version=2} status=0 QTime=76
   [junit4]   2> 843041 INFO  (cdcr-bootstrap-status-6356-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Create new update log reader for target cdcr-cluster2 with checkpoint -1 @ cdcr-cluster1:shard2
   [junit4]   2> 843059 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@4771ba73[cdcr-cluster1_shard2_replica_n3] main]
   [junit4]   2> 843061 INFO  (searcherExecutor-3056-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.SolrCore [cdcr-cluster1_shard2_replica_n3] Registered new searcher Searcher@4771ba73[cdcr-cluster1_shard2_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.6.0):C57)))}
   [junit4]   2> 843063 INFO  (cdcr-bootstrap-status-6356-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard2_replica_n3 c:cdcr-cluster1 s:shard2 r:core_node4) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.h.CdcrReplicatorManager Bootstrap successful, giving the go-ahead to replicator
   [junit4]   2> 843063 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 843063 INFO  (qtp585197032-10996) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/update params={update.distrib=FROMLEADER&update.chain=cdcr-processor-chain&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false} status=0 QTime=60
   [junit4]   2> 843072 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@23ff6a0d[cdcr-cluster1_shard1_replica_n1] main]
   [junit4]   2> 843072 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={qt=/cdcr&action=BOOTSTRAP_STATUS&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 843072 INFO  (cdcr-bootstrap-status-6361-thread-1) [    ] o.a.s.h.CdcrReplicatorManager CDCR bootstrap successful in 3 seconds
   [junit4]   2> 843073 INFO  (searcherExecutor-3055-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard1_replica_n1 c:cdcr-cluster1 s:shard1 r:core_node2) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SolrCore [cdcr-cluster1_shard1_replica_n1] Registered new searcher Searcher@23ff6a0d[cdcr-cluster1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.6.0):C43)))}
   [junit4]   2> 843074 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 843074 INFO  (qtp585197032-10997) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update params={update.distrib=FROMLEADER&update.chain=cdcr-processor-chain&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:44643/solr/cdcr-cluster1_shard2_replica_n3/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false} status=0 QTime=89
   [junit4]   2> 843075 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/update params={_stateVer_=cdcr-cluster1:4&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2} status=0 QTime=108
   [junit4]   2> 843075 INFO  (TEST-CdcrBidirectionalTest.testBiDir-seed#[5B0A46AB59A1F6D]) [    ] o.a.s.c.c.CdcrBidirectionalTest Adding 10 docs with commit=true, numDocs=200
   [junit4]   2> 843083 INFO  (qtp585197032-10994) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.c.S.Request [cdcr-cluster1_shard2_replica_n3]  webapp=/solr path=/update params={_stateVer_=cdcr-cluster1:4&wt=javabin&version=2} status=0 QTime=2
   [junit4]   2> 843108 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update params={_stateVer_=cdcr-cluster1:4&wt=javabin&version=2} status=0 QTime=27
   [junit4]   2> 843133 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343779016671232,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 843133 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1613343779016671232,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 843134 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@39d8c6b6 commitCommandVersion:1613343779016671232
   [junit4]   2> 843134 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@77e4032a commitCommandVersion:1613343779016671232
   [junit4]   2> 843139 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 843139 INFO  (qtp623115052-10943) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 843143 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Found maxVersionFromRecent 0 maxVersionFromIndex 0
   [junit4]   2> 843143 INFO  (qtp623115052-10945) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard1 r:core_node3 x:cdcr-cluster2_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster2_shard1_replica_n1]  webapp=/solr path=/cdcr params={action=SHARDCHECKPOINT&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 843144 INFO  (qtp623115052-10942) [n:127.0.0.1:37145_solr c:cdcr-cluster2 s:shard2 r:core_node4 x:cdcr-cluster2_shard2_replica_n2] o.a.s.c.S.Request [cdcr-cluster2_shard2_replica_n2]  webapp=/solr path=/cdcr params={_stateVer_=cdcr-cluster2:4&action=COLLECTIONCHECKPOINT&wt=javabin&version=2} status=0 QTime=71
   [junit4]   2> 843144 INFO  (cdcr-bootstrap-status-6361-thread-1) [    ] o.a.s.h.CdcrReplicatorManager Create new update log reader for target cdcr-cluster2 with checkpoint -1 @ cdcr-cluster1:shard1
   [junit4]   2> 843145 INFO  (cdcr-bootstrap-status-6361-thread-1) [    ] o.a.s.h.CdcrReplicatorManager Bootstrap successful, giving the go-ahead to replicator
   [junit4]   2> 843311 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@7ad3da5f[cdcr-cluster1_shard1_replica_n1] main]
   [junit4]   2> 843313 INFO  (searcherExecutor-3055-thread-1-processing-n:127.0.0.1:44643_solr x:cdcr-cluster1_shard1_replica_n1 c:cdcr-cluster1 s:shard1 r:core_node2) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.SolrCore [cdcr-cluster1_shard1_replica_n1] Registered new searcher Searcher@7ad3da5f[cdcr-cluster1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.6.0):C43) Uninverting(_1(7.6.0):C54)))}
   [junit4]   2> 843313 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 843313 INFO  (qtp585197032-11000) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update params={update.distrib=FROMLEADER&update.chain=cdcr-processor-chain&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:44643/solr/cdcr-cluster1_shard1_replica_n1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false} status=0 QTime=179
   [junit4]   2> 843313 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:shard2 r:core_node4 x:cdcr-cluster1_shard2_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@47ff425f[cdcr-cluster1_shard2_replica_n3] main]
   [junit4]   2> 843313 INFO  (qtp585197032-10999) [n:127.0.0.1:44643_solr c:cdcr-cluster1 s:sh

[...truncated too long message...]

tExceptionError: Captured an uncaught exception in thread: Thread[id=11216, name=cdcr-replicator-3092-thread-30, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
   [junit4]    > Caused by: java.lang.AssertionError: 1613343786009624576 != 1613343786008576000
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([5B0A46AB59A1F6D]:0)
   [junit4]    > 	at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
   [junit4]    > 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:748)Throwable #62: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=11217, name=cdcr-replicator-3089-thread-31, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
   [junit4]    > Caused by: java.lang.AssertionError: 1613343786009624576 != 1613343786008576000
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([5B0A46AB59A1F6D]:0)
   [junit4]    > 	at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:105)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
   [junit4]    > 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:748)Throwable #63: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=11218, name=cdcr-replicator-3092-thread-31, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
   [junit4]    > Caused by: java.lang.AssertionError: 1613343786009624576 != 1613343786008576000
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([5B0A46AB59A1F6D]:0)
   [junit4]    > 	at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
   [junit4]    > 	at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
   [junit4]    > 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]    > 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: leaving temporary files on disk at: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_5B0A46AB59A1F6D-001
   [junit4]   2> Oct 03, 2018 10:09:23 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 3 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {abc=PostingsFormat(name=Direct), xyz=PostingsFormat(name=LuceneVarGapFixedInterval), id=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=581, maxMBSortInHeap=7.810680327635939, sim=RandomSimilarity(queryNorm=false): {}, locale=de, timezone=Etc/GMT-13
   [junit4]   2> NOTE: Linux 4.4.0-130-generic amd64/Oracle Corporation 1.8.0_172 (64-bit)/cpus=4,threads=1,free=280460784,total=524812288
   [junit4]   2> NOTE: All tests run in this JVM: [CoreSorterTest, ScheduledMaintenanceTriggerTest, SearchHandlerTest, BadCopyFieldTest, TestScoreJoinQPNoScore, TestConfigSetImmutable, RollingRestartTest, TestRuleBasedAuthorizationPlugin, SoftAutoCommitTest, DebugComponentTest, SolrMetricsIntegrationTest, CdcrWithNodesRestartsTest, TestTrieFacet, SpatialHeatmapFacetsTest, ChangedSchemaMergeTest, TestSweetSpotSimilarityFactory, RecoveryZkTest, HdfsTlogReplayBufferedWhileIndexingTest, TestReplicaProperties, TestNestedUpdateProcessor, TestQueryTypes, TestDocBasedVersionConstraints, PhrasesIdentificationComponentTest, CleanupOldIndexTest, MetricsHandlerTest, ConnectionManagerTest, TestCustomStream, NumericFieldsTest, TestImpersonationWithHadoopAuth, TestComplexPhraseLeadingWildcard, ExternalFileFieldSortTest, TestExportWriter, TestEmbeddedSolrServerAdminHandler, ForceLeaderTest, StatsReloadRaceTest, TestPostingsSolrHighlighter, IgnoreLargeDocumentProcessorFactoryTest, TestPrepRecovery, TestWordDelimiterFilterFactory, TestConfigReload, TestUseDocValuesAsStored, AnalyticsMergeStrategyTest, TestMultiWordSynonyms, CloneFieldUpdateProcessorFactoryTest, TestCloudSearcherWarming, TestDownShardTolerantSearch, HttpPartitionTest, TestSchemaVersionResource, GraphQueryTest, SuggesterTest, TestCoreContainer, TestDynamicFieldCollectionResource, DistribCursorPagingTest, TestDocumentBuilder, SolrCloudReportersTest, TestNumericRangeQuery32, CdcrBidirectionalTest]
   [junit4] Completed [268/834 (1!)] on J2 in 48.03s, 1 test, 1 error <<< FAILURES!
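
The repeated failures above are all the same assertion inside org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek (line 611), comparing two nearby update versions ("1613343786009624576 != 1613343786008576000"). The actual Solr code is not reproduced here; the following is a minimal, hypothetical sketch (ToyLogReader and its forwardSeek are invented names, not the real CdcrLogReader) of the kind of forward-seek invariant whose violation produces a message of that form. Run it with `java -ea` so the assertion is enabled.

// Hypothetical illustration only -- NOT the actual Solr CdcrUpdateLog code.
// A reader seeks forward through update versions and asserts it lands exactly
// on the version it was asked to reach.
import java.util.List;

public class ForwardSeekSketch {

  /** Toy log reader over a sorted list of update versions. */
  static class ToyLogReader {
    private final List<Long> versions;  // versions in write order
    private int pos = 0;                // current read position

    ToyLogReader(List<Long> versions) {
      this.versions = versions;
    }

    /**
     * Advance until the current entry's version equals targetVersion.
     * The assertion models the invariant that failed in the test: the version
     * found at the seek point must match the requested one exactly.
     */
    void forwardSeek(long targetVersion) {
      while (pos < versions.size() && versions.get(pos) < targetVersion) {
        pos++;
      }
      long found = pos < versions.size() ? versions.get(pos) : -1L;
      assert found == targetVersion : found + " != " + targetVersion;
    }
  }

  public static void main(String[] args) {
    ToyLogReader reader = new ToyLogReader(List.of(10L, 20L, 30L));
    reader.forwardSeek(20L);  // fine: version 20 exists in the log
    reader.forwardSeek(25L);  // with -ea this trips: "30 != 25",
                              // the same shape as the failure above
  }
}
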

[...truncated 47784 lines...]
-ecj-javadoc-lint-src:
    [mkdir] Created dir: /tmp/ecj60420827
 [ecj-lint] Compiling 1233 source files to /tmp/ecj60420827
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java (at line 219)
 [ecj-lint] 	return (NamedList<Object>) new JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint] 	                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 2. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java (at line 32)
 [ecj-lint] 	import org.apache.solr.client.solrj.cloud.autoscaling.Policy;
 [ecj-lint] 	       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import org.apache.solr.client.solrj.cloud.autoscaling.Policy is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 3. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/cloud/api/collections/RestoreCmd.java (at line 260)
 [ecj-lint] 	throw new SolrException(ErrorCode.BAD_REQUEST, "Unexpected number of replicas, replicationFactor, " +
 [ecj-lint]               Replica.Type.NRT + " or " + Replica.Type.TLOG + " must be greater than 0");
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'repository' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 4. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/update/UpdateLog.java (at line 1865)
 [ecj-lint] 	if (exceptionOnExecuteUpdate.get() != null) throw exceptionOnExecuteUpdate.get();
 [ecj-lint] 	                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'proc' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 5. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/util/FileUtils.java (at line 50)
 [ecj-lint] 	in = new FileInputStream(src).getChannel();
 [ecj-lint] 	     ^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/util/FileUtils.java (at line 51)
 [ecj-lint] 	out = new FileOutputStream(destination).getChannel();
 [ecj-lint] 	      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6 problems (1 error, 5 warnings)
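
The -ecj-javadoc-lint-src failure below is driven by the single ERROR (#2, the unused org.apache.solr.client.solrj.cloud.autoscaling.Policy import in CloudUtil.java); the other five entries are resource-leak warnings. Purely as an illustration of the pattern that avoids those warnings, and not the actual org.apache.solr.util.FileUtils code, here is a minimal try-with-resources sketch (CopySketch.copy is a made-up helper) in which every channel obtained from a stream is closed on all paths:

// Hypothetical sketch only -- not the actual org.apache.solr.util.FileUtils code.
// Shows the try-with-resources pattern that avoids "Resource leak" findings for
// channels obtained from FileInputStream/FileOutputStream.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public final class CopySketch {

  /** Copy src to destination, closing both channels even if transferTo throws. */
  static void copy(File src, File destination) throws IOException {
    try (FileInputStream fis = new FileInputStream(src);
         FileOutputStream fos = new FileOutputStream(destination);
         FileChannel in = fis.getChannel();
         FileChannel out = fos.getChannel()) {
      long size = in.size();
      long transferred = 0;
      while (transferred < size) {
        transferred += in.transferTo(transferred, size - transferred, out);
      }
    } // all four resources are closed here, in reverse order of acquisition
  }

  public static void main(String[] args) throws IOException {
    copy(new File("a.txt"), new File("b.txt"));
  }
}
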

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build.xml:680: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2086: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2125: Compile failed; see the compiler error output for details.

Total time: 101 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

[JENKINS] Lucene-Solr-Tests-7.x - Build # 920 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/920/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
	at __randomizedtesting.SeedInfo.seed([86AD5E841EA5649D:8C2EE129531E6FC7]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:572)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)
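
This testMixedBounds failure is a plain timeout: assertTrue("did not finish processing in time", ...) at IndexSizeTriggerTest.java:572. As a minimal, hypothetical sketch (TimeoutWaitSketch is invented, not the real test), the usual latch-plus-timeout shape that produces this kind of AssertionError looks like:

// Hypothetical sketch -- not the actual IndexSizeTriggerTest code.
// Wait on a latch with a bounded timeout and fail if it never counts down.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimeoutWaitSketch {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch finished = new CountDownLatch(1);

    // In a real test, background processing would call finished.countDown();
    // here nothing does, so the await below times out.
    boolean done = finished.await(2, TimeUnit.SECONDS);
    if (!done) {
      // JUnit's assertTrue("did not finish processing in time", done) raises
      // the same AssertionError seen in the stack trace above.
      throw new AssertionError("did not finish processing in time");
    }
  }
}
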




Build Log:
[...truncated 14347 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 1995949 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.IndexSizeTriggerTest_86AD5E841EA5649D-001/init-core-data-001
   [junit4]   2> 1995950 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1995952 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1995952 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.IndexSizeTriggerTest_86AD5E841EA5649D-001/tempDir-001
   [junit4]   2> 1995952 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1995953 INFO  (Thread-4577) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1995953 INFO  (Thread-4577) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1995954 ERROR (Thread-4577) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1996053 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.ZkTestServer start zk server on port:37878
   [junit4]   2> 1996058 INFO  (zkConnectionManagerCallback-5375-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996065 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 1996065 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1996065 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1996065 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 1996066 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@3ff4c1e7{/solr,null,AVAILABLE}
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.AbstractConnector Started ServerConnector@332037cb{SSL,[ssl, http/1.1]}{127.0.0.1:43709}
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.e.j.s.Server Started @1996178ms
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=43709}
   [junit4]   2> 1996067 ERROR (jetty-launcher-5372-thread-1) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 7.6.0
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-10-04T16:55:19.307Z
   [junit4]   2> 1996067 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 1996074 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1996074 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1996074 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@3f880256{/solr,null,AVAILABLE}
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.AbstractConnector Started ServerConnector@2f4707e6{SSL,[ssl, http/1.1]}{127.0.0.1:36548}
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.e.j.s.Server Started @1996187ms
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=36548}
   [junit4]   2> 1996075 ERROR (jetty-launcher-5372-thread-2) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1996075 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 7.6.0
   [junit4]   2> 1996076 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1996076 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1996076 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-10-04T16:55:19.316Z
   [junit4]   2> 1996078 INFO  (zkConnectionManagerCallback-5379-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996078 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1996098 INFO  (zkConnectionManagerCallback-5377-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996098 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1996192 INFO  (jetty-launcher-5372-thread-2) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37878/solr
   [junit4]   2> 1996194 INFO  (zkConnectionManagerCallback-5383-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996196 INFO  (zkConnectionManagerCallback-5385-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996261 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:36548_solr
   [junit4]   2> 1996262 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.c.Overseer Overseer (id=72448243339296772-127.0.0.1:36548_solr-n_0000000000) starting
   [junit4]   2> 1996267 INFO  (zkConnectionManagerCallback-5392-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996269 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37878/solr ready
   [junit4]   2> 1996270 INFO  (OverseerStateUpdate-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [n:127.0.0.1:36548_solr    ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:36548_solr
   [junit4]   2> 1996270 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:36548_solr
   [junit4]   2> 1996277 INFO  (zkCallback-5391-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1996277 INFO  (zkCallback-5384-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1996280 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread Adding .auto_add_replicas and .scheduled_maintenance triggers
   [junit4]   2> 1996281 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 1
   [junit4]   2> 1996282 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 1, lastZnodeVersion -1
   [junit4]   2> 1996282 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 1
   [junit4]   2> 1996287 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.NodeLostTrigger NodeLostTrigger .auto_add_replicas - Initial livenodes: [127.0.0.1:36548_solr]
   [junit4]   2> 1996288 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996288 DEBUG (OverseerAutoScalingTriggerThread-72448243339296772-127.0.0.1:36548_solr-n_0000000000) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 1, lastZnodeVersion 1
   [junit4]   2> 1996288 DEBUG (ScheduledTrigger-8734-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1
   [junit4]   2> 1996293 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 1996328 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36548.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996335 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36548.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996335 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36548.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996336 INFO  (jetty-launcher-5372-thread-2) [n:127.0.0.1:36548_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.IndexSizeTriggerTest_86AD5E841EA5649D-001/tempDir-001/node2/.
   [junit4]   2> 1996348 INFO  (jetty-launcher-5372-thread-1) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37878/solr
   [junit4]   2> 1996349 INFO  (zkConnectionManagerCallback-5397-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996351 INFO  (zkConnectionManagerCallback-5399-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996360 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1996363 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 1996363 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:43709_solr
   [junit4]   2> 1996370 INFO  (zkCallback-5391-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1996370 INFO  (zkCallback-5398-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1996370 INFO  (zkCallback-5384-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1996381 INFO  (zkConnectionManagerCallback-5406-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996382 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1996383 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37878/solr ready
   [junit4]   2> 1996383 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 1996398 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43709.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996406 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43709.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996407 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43709.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@5558d1c
   [junit4]   2> 1996408 INFO  (jetty-launcher-5372-thread-1) [n:127.0.0.1:43709_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.IndexSizeTriggerTest_86AD5E841EA5649D-001/tempDir-001/node1/.
   [junit4]   2> 1996431 INFO  (zkConnectionManagerCallback-5409-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996434 INFO  (zkConnectionManagerCallback-5414-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1996435 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1996436 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37878/solr ready
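The lines above show the usual SolrCloud test bring-up: two Jetty-hosted Solr nodes register as live nodes against the test ZooKeeper at 127.0.0.1:37878/solr, one of them wins the Overseer election, and the suite's worker thread gets a cluster-state client. In Solr's test framework that environment comes from a MiniSolrCloudCluster; a minimal sketch of such a setup (the configset name and path are assumptions, not this suite's actual values) looks like:

    import java.nio.file.Paths;

    import org.apache.solr.cloud.SolrCloudTestCase;
    import org.junit.BeforeClass;

    public class TwoNodeClusterExample extends SolrCloudTestCase {

      @BeforeClass
      public static void setupCluster() throws Exception {
        // Starts two embedded Jetty/Solr nodes backed by an in-process ZooKeeper,
        // mirroring the jetty-launcher and ZkTestServer activity in the log above.
        // The configset name and path are placeholders for illustration.
        configureCluster(2)
            .addConfig("conf", Paths.get("src/test-files/solr/configsets/cloud-minimal/conf"))
            .configure();
      }
    }

Once configure() returns, SolrCloudTestCase exposes the cluster through its static cluster field, and cluster.getSolrClient() hands back a CloudSolrClient already pointed at the test ZooKeeper, which is the "Cluster at 127.0.0.1:37878/solr ready" state logged above.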
   [junit4]   2> 1996459 DEBUG (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.a.s.SimClusterStateProvider --- new Overseer leader: 127.0.0.1:10006_solr
   [junit4]   2> 1996459 DEBUG (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=0
   [junit4]   2> 1996459 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Adding .auto_add_replicas and .scheduled_maintenance triggers
   [junit4]   2> 1996459 DEBUG (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 0
   [junit4]   2> 1996459 INFO  (SUITE-IndexSizeTriggerTest-seed#[86AD5E841EA5649D]-worker) [    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 1996459 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 0
   [junit4]   2> 1996460 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 0, lastZnodeVersion -1
   [junit4]   2> 1996460 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 0
   [junit4]   2> 1996460 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.NodeLostTrigger NodeLostTrigger .auto_add_replicas - Initial livenodes: [127.0.0.1:10006_solr, 127.0.0.1:10007_solr]
   [junit4]   2> 1996461 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996461 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 0, lastZnodeVersion 0
   [junit4]   2> 1996461 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996485 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996505 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996526 DEBUG (ScheduledTrigger-8750-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996546 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996566 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996586 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996606 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996626 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996647 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996668 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996681 INFO  (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.SolrTestCaseJ4 ###Starting testMaxOps
   [junit4]   2> 1996681 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider -- simCreateCollection testMaxOps_collection, currentVersion=1
   [junit4]   2> 1996682 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=1
   [junit4]   2> 1996682 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 1
   [junit4]   2> 1996688 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996692 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=2
   [junit4]   2> 1996692 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 2
   [junit4]   2> 1996694 DEBUG (simCloudManagerPool-8749-thread-1) [    ] o.a.s.c.a.s.SimClusterStateProvider -- finished createCollection testMaxOps_collection, currentVersion=3
   [junit4]   2> 1996703 DEBUG (simCloudManagerPool-8749-thread-2) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=3
   [junit4]   2> 1996703 DEBUG (simCloudManagerPool-8749-thread-2) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 3
   [junit4]   2> 1996704 DEBUG (simCloudManagerPool-8749-thread-12) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for testMaxOps_collection / shard1 (currentVersion=4): {"core_node1":{
   [junit4]   2>     "core":"testMaxOps_collection_shard1_replica_n1",
   [junit4]   2>     "shard":"shard1",
   [junit4]   2>     "collection":"testMaxOps_collection",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1996708 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996713 DEBUG (simCloudManagerPool-8749-thread-4) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=4
   [junit4]   2> 1996714 DEBUG (simCloudManagerPool-8749-thread-4) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 4
   [junit4]   2> 1996714 DEBUG (simCloudManagerPool-8749-thread-13) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for testMaxOps_collection / shard2 (currentVersion=5): {"core_node3":{
   [junit4]   2>     "core":"testMaxOps_collection_shard2_replica_n3",
   [junit4]   2>     "shard":"shard2",
   [junit4]   2>     "collection":"testMaxOps_collection",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1996724 DEBUG (simCloudManagerPool-8749-thread-3) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=5
   [junit4]   2> 1996724 DEBUG (simCloudManagerPool-8749-thread-3) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 5
   [junit4]   2> 1996728 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996734 DEBUG (simCloudManagerPool-8749-thread-5) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=6
   [junit4]   2> 1996736 DEBUG (simCloudManagerPool-8749-thread-5) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 6
   [junit4]   2> 1996746 DEBUG (simCloudManagerPool-8749-thread-6) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=7
   [junit4]   2> 1996747 DEBUG (simCloudManagerPool-8749-thread-6) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 7
   [junit4]   2> 1996747 DEBUG (simCloudManagerPool-8749-thread-14) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for testMaxOps_collection / shard3 (currentVersion=8): {"core_node5":{
   [junit4]   2>     "core":"testMaxOps_collection_shard3_replica_n5",
   [junit4]   2>     "shard":"shard3",
   [junit4]   2>     "collection":"testMaxOps_collection",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1996748 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996757 DEBUG (simCloudManagerPool-8749-thread-7) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=8
   [junit4]   2> 1996757 DEBUG (simCloudManagerPool-8749-thread-7) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 8
   [junit4]   2> 1996768 DEBUG (simCloudManagerPool-8749-thread-8) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=9
   [junit4]   2> 1996768 DEBUG (ScheduledTrigger-8750-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996768 DEBUG (simCloudManagerPool-8749-thread-8) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 9
   [junit4]   2> 1996769 DEBUG (simCloudManagerPool-8749-thread-15) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for testMaxOps_collection / shard4 (currentVersion=10): {"core_node7":{
   [junit4]   2>     "core":"testMaxOps_collection_shard4_replica_n7",
   [junit4]   2>     "shard":"shard4",
   [junit4]   2>     "collection":"testMaxOps_collection",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1996779 DEBUG (simCloudManagerPool-8749-thread-9) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=10
   [junit4]   2> 1996779 DEBUG (simCloudManagerPool-8749-thread-9) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 10
   [junit4]   2> 1996789 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996790 DEBUG (simCloudManagerPool-8749-thread-10) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=11
   [junit4]   2> 1996790 DEBUG (simCloudManagerPool-8749-thread-10) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 11
   [junit4]   2> 1996791 DEBUG (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=12
   [junit4]   2> 1996791 DEBUG (simCloudManagerPool-8749-thread-16) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for testMaxOps_collection / shard5 (currentVersion=12): {"core_node9":{
   [junit4]   2>     "core":"testMaxOps_collection_shard5_replica_n9",
   [junit4]   2>     "shard":"shard5",
   [junit4]   2>     "collection":"testMaxOps_collection",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1996791 DEBUG (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 12
   [junit4]   2> 1996802 DEBUG (simCloudManagerPool-8749-thread-11) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=13
   [junit4]   2> 1996802 DEBUG (simCloudManagerPool-8749-thread-11) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 13
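Each "elected new leader" block above is the simulator picking a leader replica for one shard of testMaxOps_collection and recording its core name, node, index size and document counts. Against a real (non-simulated) cluster the same information is available from the published cluster state; a small SolrJ sketch, reusing the collection and shard names from the log and assuming the test ZooKeeper address, would be:

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.cloud.Replica;

    public class LeaderLookup {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("127.0.0.1:37878"), Optional.of("/solr")).build()) {
          client.connect();
          // Read the leader replica for shard1 of the collection created above.
          Replica leader = client.getZkStateReader().getClusterState()
              .getCollection("testMaxOps_collection")
              .getSlice("shard1")
              .getLeader();
          System.out.println("Leader core " + leader.getCoreName()
              + " on node " + leader.getNodeName());
        }
      }
    }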
   [junit4]   2> 1996809 DEBUG (simCloudManagerPool-8749-thread-17) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1996809 DEBUG (ScheduledTrigger-8750-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996809 DEBUG (simCloudManagerPool-8749-thread-17) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 1
   [junit4]   2> 1996809 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 1
   [junit4]   2> 1996810 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996810 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 1, lastZnodeVersion 1
   [junit4]   2> 1996811 DEBUG (simCloudManagerPool-8749-thread-18) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1996811 DEBUG (simCloudManagerPool-8749-thread-18) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 2
   [junit4]   2> 1996812 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 2
   [junit4]   2> 1996812 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996812 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 2, lastZnodeVersion 2
   [junit4]   2> 1996813 DEBUG (simCloudManagerPool-8749-thread-19) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1996813 DEBUG (simCloudManagerPool-8749-thread-19) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 3
   [junit4]   2> 1996813 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 3
   [junit4]   2> 1996813 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996814 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 3, lastZnodeVersion 3
   [junit4]   2> 1996829 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996849 DEBUG (ScheduledTrigger-8750-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996869 DEBUG (simCloudManagerPool-8749-thread-21) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1996869 DEBUG (simCloudManagerPool-8749-thread-21) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 4
   [junit4]   2> 1996869 DEBUG (ScheduledTrigger-8750-thread-2) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996869 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 4
   [junit4]   2> 1996870 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1996870 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 4, lastZnodeVersion 4
   [junit4]   2> 1996889 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996909 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996930 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996950 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996970 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1996986 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.ScheduledTriggers ++++++++ Cooldown inactive - processing event: {
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}
   [junit4]   2> 1996987 DEBUG (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.ScheduledTriggers Pausing all triggers: [index_size_trigger5, .auto_add_replicas, .scheduled_maintenance]
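The INDEXSIZE event above was produced by the test's index_size_trigger5, which watches per-core document counts and index sizes and, once shards grow past its thresholds, requests SPLITSHARD operations and pauses the other triggers while the event is processed. Such a trigger is normally registered through the autoscaling write API; a rough sketch against a plain HTTP endpoint (host, port, thresholds and waitFor are illustrative assumptions, not the values this simulated test uses) is:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RegisterIndexSizeTrigger {
      public static void main(String[] args) throws Exception {
        // set-trigger payload for an index-size trigger with compute/execute actions;
        // the numeric thresholds below are examples only.
        String payload = "{"
            + "\"set-trigger\": {"
            + "  \"name\": \"index_size_trigger5\","
            + "  \"event\": \"indexSize\","
            + "  \"aboveDocs\": 10,"
            + "  \"maxOps\": 3,"
            + "  \"waitFor\": \"1s\","
            + "  \"enabled\": true,"
            + "  \"actions\": ["
            + "    {\"name\": \"compute_plan\", \"class\": \"solr.ComputePlanAction\"},"
            + "    {\"name\": \"execute_plan\", \"class\": \"solr.ExecutePlanAction\"}"
            + "  ]"
            + "}}";
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://127.0.0.1:8983/solr/admin/autoscaling").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
          out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
      }
    }

A maxOps setting like the one sketched here caps how many operations a single event may request, which is consistent with the later event in this log listing only three SPLITSHARD requests even though five replicas appear under aboveSize.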
   [junit4]   2> 1996988 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider -- simCreateCollection .system, currentVersion=14
   [junit4]   2> 1996988 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=14
   [junit4]   2> 1996989 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 14
   [junit4]   2> 1996999 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=15
   [junit4]   2> 1996999 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 15
   [junit4]   2> 1997010 DEBUG (simCloudManagerPool-8749-thread-23) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=16
   [junit4]   2> 1997010 DEBUG (simCloudManagerPool-8749-thread-23) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 16
   [junit4]   2> 1997010 DEBUG (simCloudManagerPool-8749-thread-23) [    ] o.a.s.c.a.s.SimClusterStateProvider -- elected new leader for .system / shard1 (currentVersion=17): {"core_node1":{
   [junit4]   2>     "core":".system_shard1_replica_n1",
   [junit4]   2>     "shard":"shard1",
   [junit4]   2>     "collection":".system",
   [junit4]   2>     "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>     "type":"NRT",
   [junit4]   2>     "leader":"true",
   [junit4]   2>     "SEARCHER.searcher.maxDoc":0,
   [junit4]   2>     "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>     "INDEX.sizeInBytes":10240,
   [junit4]   2>     "state":"active",
   [junit4]   2>     "INDEX.sizeInGB":9.5367431640625E-6,
   [junit4]   2>     "SEARCHER.searcher.numDocs":0}}
   [junit4]   2> 1997024 DEBUG (simCloudManagerPool-8749-thread-24) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=17
   [junit4]   2> 1997025 DEBUG (simCloudManagerPool-8749-thread-24) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 17
   [junit4]   2> 1997025 DEBUG (simCloudManagerPool-8749-thread-22) [    ] o.a.s.c.a.s.SimClusterStateProvider -- finished createCollection .system, currentVersion=18
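The .system collection created above is where the autoscaling framework keeps its trigger history once it exists; the earlier "No .system collection, keeping metrics history in memory" lines are the fallback taken before that point. Outside the simulator it can be created up front with a plain Collections API call; a minimal SolrJ sketch (one shard, one replica, the _default configset, all assumptions for illustration) would be:

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateSystemCollection {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("127.0.0.1:37878"), Optional.of("/solr")).build()) {
          // A single-shard, single-replica .system collection is enough for a test cluster.
          CollectionAdminRequest.createCollection(".system", "_default", 1, 1).process(client);
        }
      }
    }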
   [junit4]   2> 1997027 INFO  (ScheduledTrigger-8750-thread-4) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038831790229551, stage=STARTED, actionName='null', event={
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}, context={}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997029 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers -- processing actions for {
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}
   [junit4]   2> 1997031 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038831986231751, stage=BEFORE_ACTION, actionName='compute_plan', event={
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}, context={properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger5}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997031 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction -- processing event: {
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}} with context properties: {BEFORE_ACTION=[compute_plan]}
   [junit4]   2> 1997034 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard5
   [junit4]   2> 1997034 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard4
   [junit4]   2> 1997034 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard2
   [junit4]   2> 1997034 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard1
   [junit4]   2> 1997034 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard3
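Each "Computed Plan" line above is ComputePlanAction turning one requested operation from the event into a concrete Collections API call, here SPLITSHARD on a named shard of testMaxOps_collection. Issued by hand through SolrJ, a single such plan would look roughly like the sketch below (the ZooKeeper address matches the test cluster above; running it directly against this collection is purely illustrative):

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;
    import org.apache.solr.client.solrj.response.CollectionAdminResponse;

    public class SplitShardExample {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("127.0.0.1:37878"), Optional.of("/solr")).build()) {
          // Equivalent of: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard5
          CollectionAdminResponse rsp = CollectionAdminRequest
              .splitShard("testMaxOps_collection")
              .setShardName("shard5")
              .process(client);
          System.out.println("SPLITSHARD success: " + rsp.isSuccess());
        }
      }
    }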
   [junit4]   2> 1997035 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038832176172701, stage=AFTER_ACTION, actionName='compute_plan', event={
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "__start__":5,
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}, context={properties.operations=[{
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard5"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard4"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard2"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard1"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard3"}], properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger5, properties.AFTER_ACTION=[compute_plan]}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997036 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038832223466251, stage=SUCCEEDED, actionName='null', event={
   [junit4]   2>   "id":"1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038824054089251,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "__start__":5,
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038829814525301,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard1"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard3"}]}}]}}, context={}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997036 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: index_size_trigger5 after 100ms
   [junit4]   2> 1997036 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: .auto_add_replicas after 100ms
   [junit4]   2> 1997036 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: .scheduled_maintenance after 100ms
   [junit4]   2> 1997036 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers -- processing took 366 ms for event id=1574474a9ae623Tbbgyjfquicgu14dinz2lpqrxm
   [junit4]   2> 1997037 DEBUG (simCloudManagerPool-8749-thread-27) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1997037 DEBUG (simCloudManagerPool-8749-thread-27) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 5
   [junit4]   2> 1997037 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 5
   [junit4]   2> 1997037 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1997037 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 5, lastZnodeVersion 5
   [junit4]   2> 1997039 DEBUG (simCloudManagerPool-8749-thread-28) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1997039 DEBUG (simCloudManagerPool-8749-thread-28) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 6
   [junit4]   2> 1997040 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 6
   [junit4]   2> 1997040 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread -- clean old nodeAdded markers
   [junit4]   2> 1997040 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Current znodeVersion 6, lastZnodeVersion 6
   [junit4]   2> 1997136 DEBUG (ScheduledTrigger-8750-thread-3) [    ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
   [junit4]   2> 1997143 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.ScheduledTriggers ++++++++ Cooldown inactive - processing event: {
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}
   [junit4]   2> 1997143 DEBUG (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Pausing all triggers: [index_size_trigger5, .auto_add_replicas, .scheduled_maintenance]
   [junit4]   2> 1997144 INFO  (ScheduledTrigger-8750-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038837647414551, stage=STARTED, actionName='null', event={
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}, context={}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997144 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers -- processing actions for {
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}
   [junit4]   2> 1997144 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038837672754001, stage=BEFORE_ACTION, actionName='compute_plan', event={
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}, context={properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger5}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997145 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction -- processing event: {
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}} with context properties: {BEFORE_ACTION=[compute_plan]}
   [junit4]   2> 1997145 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard5
   [junit4]   2> 1997145 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard4
   [junit4]   2> 1997145 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ComputePlanAction Computed Plan: action=SPLITSHARD&collection=testMaxOps_collection&shard=shard2
   [junit4]   2> 1997146 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038837721564101, stage=AFTER_ACTION, actionName='compute_plan', event={
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "__start__":3,
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}, context={properties.operations=[{
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard5"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard4"}, {
   [junit4]   2>   "class":"org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard",
   [junit4]   2>   "method":"GET",
   [junit4]   2>   "params.action":"SPLITSHARD",
   [junit4]   2>   "params.collection":"testMaxOps_collection",
   [junit4]   2>   "params.shard":"shard2"}], properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger5, properties.AFTER_ACTION=[compute_plan]}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997146 INFO  (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.IndexSizeTriggerTest =======> CapturedEvent{timestamp=6038837755647501, stage=SUCCEEDED, actionName='null', event={
   [junit4]   2>   "id":"15744940320723Tbbgyjfquicgu14dinz2lpqrxp",
   [junit4]   2>   "source":"index_size_trigger5",
   [junit4]   2>   "eventTime":6038832469378851,
   [junit4]   2>   "eventType":"INDEXSIZE",
   [junit4]   2>   "properties":{
   [junit4]   2>     "__start__":3,
   [junit4]   2>     "aboveSize":{
   [junit4]   2>       "testMaxOps_collection_shard1_replica_n1":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard2_replica_n3":"docs=38, bytes=29240",
   [junit4]   2>       "testMaxOps_collection_shard3_replica_n5":"docs=30, bytes=25240",
   [junit4]   2>       "testMaxOps_collection_shard4_replica_n7":"docs=46, bytes=33240",
   [junit4]   2>       "testMaxOps_collection_shard5_replica_n9":"docs=48, bytes=34240"},
   [junit4]   2>     "belowSize":{},
   [junit4]   2>     "_enqueue_time_":6038837626240901,
   [junit4]   2>     "requestedOps":[
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard5"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard4"}]}},
   [junit4]   2>       {
   [junit4]   2>         "action":"SPLITSHARD",
   [junit4]   2>         "hints":{"COLL_SHARD":[{
   [junit4]   2>               "first":"testMaxOps_collection",
   [junit4]   2>               "second":"shard2"}]}}]}}, context={}, config={
   [junit4]   2>   "trigger":"index_size_trigger5",
   [junit4]   2>   "stage":[
   [junit4]   2>     "STARTED",
   [junit4]   2>     "ABORTED",
   [junit4]   2>     "SUCCEEDED",
   [junit4]   2>     "FAILED"],
   [junit4]   2>   "beforeAction":["compute_plan"],
   [junit4]   2>   "afterAction":["compute_plan"],
   [junit4]   2>   "class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener"}, message='null'}
   [junit4]   2> 1997146 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: index_size_trigger5 after 100ms
   [junit4]   2> 1997146 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: .auto_add_replicas after 100ms
   [junit4]   2> 1997146 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers Resuming trigger: .scheduled_maintenance after 100ms
   [junit4]   2> 1997146 DEBUG (AutoscalingActionExecutor-8751-thread-1) [    ] o.a.s.c.a.ScheduledTriggers -- processing took 107 ms for event id=15744940320723Tbbgyjfquicgu14dinz2lpqrxp
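For context, each "Computed Plan" line above is just a plain SPLITSHARD collection-admin call (the captured "operations" list shows the equivalent CollectionAdminRequest$SplitShard parameters). A minimal SolrJ sketch of issuing the first of them by hand could look like the following; the base URL is an assumption for illustration only, and in this test the cluster is simulated (SimClusterStateProvider), so ExecutePlanAction applies the plan in memory rather than over HTTP:

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;
    import org.apache.solr.client.solrj.response.CollectionAdminResponse;

    public class ManualSplitShard {
      public static void main(String[] args) throws Exception {
        // Base URL is an illustrative assumption; not taken from the test.
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
          // Same operation ComputePlanAction computed above:
          // action=SPLITSHARD&collection=testMaxOps_collection&shard=shard5
          CollectionAdminResponse rsp = CollectionAdminRequest
              .splitShard("testMaxOps_collection")
              .setShardName("shard5")
              .process(client);
          System.out.println("SPLITSHARD status: " + rsp.getStatus());
        }
      }
    }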
   [junit4]   2> 1997147 DEBUG (simCloudManagerPool-8749-thread-31) [    ] o.a.s.c.a.AutoScalingHandler Verified autoscaling configuration
   [junit4]   2> 1997147 DEBUG (simCloudManagerPool-8749-thread-31) [    ] o.a.s.c.a.OverseerTriggerThread Refreshing /autoscaling.json with znode version 7
   [junit4]   2> 1997147 DEBUG (Simulated OverseerAutoScalingTriggerThread) [    ] o.a.s.c.a.OverseerTriggerThread Processed trigger updates upto znodeVersion 7
   [junit4]   2> 1997148 DEBUG (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.c.a.s.SimClusterStateProvider ** creating new collection states, currentVersion=18
   [junit4]   2> 1997149 DEBUG (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.c.a.s.SimClusterStateProvider ** saved cluster state version 18
   [junit4]   2> 1997150 INFO  (TEST-IndexSizeTriggerTest.testMaxOps-seed#[86AD5E841EA5649D]) [    ] o.a.s.c.a.IndexSizeTriggerTest #######################################
   [junit4]   2> ############ CLUSTER STATE ############
   [junit4]   2> #######################################
   [junit4]   2> ## Live nodes:		2
   [junit4]   2> ## Empty nodes:	0
   [junit4]   2> ## Dead nodes:		0
   [junit4]   2> ## Collections:
   [junit4]   2> ##  * .system
   [junit4]   2> ##    shardsTotal	1
   [junit4]   2> ##    shardsState	{active=1}
   [junit4]   2> ##      shardsWithoutLeader	0
   [junit4]   2> ##    totalReplicas	2
   [junit4]   2> ##      activeReplicas	2
   [junit4]   2> ##      inactiveReplicas	0
   [junit4]   2> ##    totalActiveDocs	4
   [junit4]   2> ##      bufferedDocs	0
   [junit4]   2> ##      maxActiveSliceDocs	4
   [junit4]   2> ##      minActiveSliceDocs	4
   [junit4]   2> ##      avgActiveSliceDocs	4
   [junit4]   2> ##    totalInactiveDocs	0
   [junit4]   2> ##      maxInactiveSliceDocs	0
   [junit4]   2> ##      minInactiveSliceDocs	0
   [junit4]   2> ##      avgInactiveSliceDocs	NaN
   [junit4]   2> ##    totalActiveBytes	12,240
   [junit4]   2> ##      maxActiveSliceBytes	12,240
   [junit4]   2> ##      minActiveSliceBytes	12,240
   [junit4]   2> ##      avgActiveSliceBytes	12,240
   [junit4]   2> ##    totalInactiveBytes	0
   [junit4]   2> ##      maxInactiveSliceBytes	0
   [junit4]   2> ##      minInactiveSliceBytes	0
   [junit4]   2> ##      avgInactiveSliceBytes	NaN
   [junit4]   2> ##    totalActiveDeletedDocs	0
   [junit4]   2> ##  * testMaxOps_collection
   [junit4]   2> ##    shardsTotal	5
   [junit4]   2> ##    shardsState	{active=5}
   [junit4]   2> ##      shardsWithoutLeader	0
   [junit4]   2> ##    totalReplicas	10
   [junit4]   2> ##      activeReplicas	10
   [junit4]   2> ##      inactiveReplicas	0
   [junit4]   2> ##    totalActiveDocs	200
   [junit4]   2> ##      bufferedDocs	0
   [junit4]   2> ##      maxActiveSliceDocs	48
   [junit4]   2> ##      minActiveSliceDocs	30
   [junit4]   2> ##      avgActiveSliceDocs	40
   [junit4]   2> ##    totalInactiveDocs	0
   [junit4]   2> ##      maxInactiveSliceDocs	0
   [junit4]   2> ##      minInactiveSliceDocs	0
   [junit4]   2> ##      avgInactiveSliceDocs	NaN
   [junit4]   2> ##    totalActiveBytes	151,200
   [junit4]   2> ##      maxActiveSliceBytes	34,240
   [junit4]   2> ##      minActiveSliceBytes	25,240
   [junit4]   2> ##      avgActiveSliceBytes	30,240
   [junit4]   2> ##    totalInactiveBytes	0
   [junit4]   2> ##      maxInactiveSliceBytes	0
   [junit4]   2> ##      minInactiveSliceBytes	0
   [junit4]   2> ##      avgInactiveSliceBytes	NaN
   [junit4]   2> ##    totalActiveDeletedDocs	0
   [junit4]   2> DocCollection(testMaxOps_collection//clusterstate.json/18)={
   [junit4]   2>   "replicationFactor":"1",
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"10",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"2",
   [junit4]   2>   "tlogReplicas":"0",
   [junit4]   2>   "autoCreated":"true",
   [junit4]   2>   "shards":{
   [junit4]   2>     "shard2":{
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node3":{
   [junit4]   2>           "core":"testMaxOps_collection_shard2_replica_n3",
   [junit4]   2>           "leader":"true",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":38,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":29240,
   [junit4]   2>           "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.7231872081756592E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":38},
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "core":"testMaxOps_collection_shard2_replica_n4",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":38,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":29240,
   [junit4]   2>           "node_name":"127.0.0.1:10007_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.7231872081756592E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":38}},
   [junit4]   2>       "range":"b3330000-e665ffff",
   [junit4]   2>       "state":"active"},
   [junit4]   2>     "shard3":{
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "core":"testMaxOps_collection_shard3_replica_n6",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":30,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":25240,
   [junit4]   2>           "node_name":"127.0.0.1:10007_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.3506581783294678E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":30},
   [junit4]   2>         "core_node5":{
   [junit4]   2>           "core":"testMaxOps_collection_shard3_replica_n5",
   [junit4]   2>           "leader":"true",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":30,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":25240,
   [junit4]   2>           "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.3506581783294678E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":30}},
   [junit4]   2>       "range":"e6660000-1998ffff",
   [junit4]   2>       "state":"active"},
   [junit4]   2>     "shard4":{
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node7":{
   [junit4]   2>           "core":"testMaxOps_collection_shard4_replica_n7",
   [junit4]   2>           "leader":"true",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":46,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":33240,
   [junit4]   2>           "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":3.0957162380218506E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":46},
   [junit4]   2>         "core_node8":{
   [junit4]   2>           "core":"testMaxOps_collection_shard4_replica_n8",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":46,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":33240,
   [junit4]   2>           "node_name":"127.0.0.1:10007_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":3.0957162380218506E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":46}},
   [junit4]   2>       "range":"19990000-4ccbffff",
   [junit4]   2>       "state":"active"},
   [junit4]   2>     "shard5":{
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node10":{
   [junit4]   2>           "core":"testMaxOps_collection_shard5_replica_n10",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":48,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":34240,
   [junit4]   2>           "node_name":"127.0.0.1:10007_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":3.1888484954833984E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":48},
   [junit4]   2>         "core_node9":{
   [junit4]   2>           "core":"testMaxOps_collection_shard5_replica_n9",
   [junit4]   2>           "leader":"true",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":48,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":34240,
   [junit4]   2>           "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":3.1888484954833984E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":48}},
   [junit4]   2>       "range":"4ccc0000-7fffffff",
   [junit4]   2>       "state":"active"},
   [junit4]   2>     "shard1":{
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node1":{
   [junit4]   2>           "core":"testMaxOps_collection_shard1_replica_n1",
   [junit4]   2>           "leader":"true",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":38,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":29240,
   [junit4]   2>           "node_name":"127.0.0.1:10006_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.7231872081756592E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":38},
   [junit4]   2>         "core_node2":{
   [junit4]   2>           "core":"testMaxOps_collection_shard1_replica_n2",
   [junit4]   2>           "SEARCHER.searcher.maxDoc":38,
   [junit4]   2>           "SEARCHER.searcher.deletedDocs":0,
   [junit4]   2>           "INDEX.sizeInBytes":29240,
   [junit4]   2>           "node_name":"127.0.0.1:10007_solr",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "INDEX.sizeInGB":2.7231872081756592E-5,
   [junit4]   2>           "SEARCHER.searcher.numDocs":38}},
   [junit4]   2>       "range":"80000000-b332ffff",
   [junit4]   2>       "state":"active"}}}
   [junit4]   2> DocCollection(.system//clusterstate.json/18)={
   [junit4]   2>   "

[...truncated too long message...]

arch/TestSameScoresWithThreads.java (at line 49)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 70. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java (at line 313)
 [ecj-lint] 	SearcherManager sm = new SearcherManager(writer, false, false, new SearcherFactory());
 [ecj-lint] 	                ^^
 [ecj-lint] Resource leak: 'sm' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 71. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java (at line 52)
 [ecj-lint] 	new TermQuery(new Term("foo", "bar"), TermContext.build(new MultiReader().getContext(), new Term("foo", "bar"))));
 [ecj-lint] 	                                                        ^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 72. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestBufferedIndexInput.java (at line 50)
 [ecj-lint] 	MyBufferedIndexInput input = new MyBufferedIndexInput();
 [ecj-lint] 	                     ^^^^^
 [ecj-lint] Resource leak: 'input' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 73. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestHugeRamFile.java (at line 88)
 [ecj-lint] 	RAMInputStream in = new RAMInputStream("testcase", f);
 [ecj-lint] 	               ^^
 [ecj-lint] Resource leak: 'in' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 74. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestRAMDirectory.java (at line 81)
 [ecj-lint] 	RAMDirectory ramDir = new RAMDirectory(fsDir, newIOContext(random()));
 [ecj-lint] 	             ^^^^^^
 [ecj-lint] Resource leak: 'ramDir' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 75. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 32)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 76. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 37)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 77. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 43)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 78. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 51)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 79. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 60)
 [ecj-lint] 	TrackingDirectoryWrapper dest = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^^
 [ecj-lint] Resource leak: 'dest' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 80. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestCloseableThreadLocal.java (at line 31)
 [ecj-lint] 	CloseableThreadLocal<Object> ctl = new CloseableThreadLocal<>();
 [ecj-lint] 	                             ^^^
 [ecj-lint] Resource leak: 'ctl' is never closed
 [ecj-lint] ----------
 [ecj-lint] 81. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestCloseableThreadLocal.java (at line 39)
 [ecj-lint] 	CloseableThreadLocal<Object> ctl = new CloseableThreadLocal<>();
 [ecj-lint] 	                             ^^^
 [ecj-lint] Resource leak: 'ctl' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 82. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestQueryBuilder.java (at line 21)
 [ecj-lint] 	import java.io.Reader;
 [ecj-lint] 	       ^^^^^^^^^^^^^^
 [ecj-lint] The import java.io.Reader is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 83. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/Test2BBKDPoints.java (at line 44)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "_0", 1, Long.BYTES,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 84. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/Test2BBKDPoints.java (at line 81)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "_0", 2, Long.BYTES,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 85. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 51)
 [ecj-lint] 	BKDWriter w = new BKDWriter(100, dir, "tmp", 1, 4, 2, 1.0f, 100, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 86. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 126)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "tmp", numDims, 4, maxPointsInLeafNode, maxMB, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 87. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 267)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "tmp", numDims, numBytesPerDim, maxPointsInLeafNode, maxMB, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 88. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 715)
 [ecj-lint] 	w = new BKDWriter(numValues, dir, "_" + seg, numDims, numBytesPerDim, maxPointsInLeafNode, maxMB, docValues.length, false);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 89. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 974)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", 1, Integer.BYTES, 2, 0.01f, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 90. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 1015)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", 2, Integer.BYTES, 2, 0.01f, numDocs,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 91. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 1066)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", numDims, bytesPerDim, 32, 1f, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 92. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java (at line 311)
 [ecj-lint] 	final LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	                   ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] 92 problems (1 error, 91 warnings)
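Of the 92 problems listed, the 91 "Resource leak" WARNINGs are informational; it is the single ERROR (item 82, the unused java.io.Reader import in TestQueryBuilder.java) that makes the ecj-lint pass report "Compile failed" below. For reference, the pattern ecj expects for the leak warnings is plain try-with-resources; a generic, self-contained illustration using JDK classes only (not Lucene test code, and the file name is an arbitrary placeholder):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class TryWithResourcesExample {
      public static void main(String[] args) throws IOException {
        // "some-file.txt" is an arbitrary placeholder, not a file from the build.
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("some-file.txt"))) {
          System.out.println(reader.readLine()); // use the resource inside the block
        } // reader.close() runs automatically here, even if an exception was thrown
      }
    }

The same shape applies to the cases flagged above: Closeables such as LineFileDocs, RAMDirectory, Analyzer or SearcherManager can be declared in the try header (or closed in a finally block) so that close() always runs.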

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:202: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2092: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2125: Compile failed; see the compiler error output for details.

Total time: 88 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

[JENKINS] Lucene-Solr-Tests-7.x - Build # 919 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/919/

All tests passed

Build Log:
[...truncated 51276 lines...]
-ecj-javadoc-lint-tests:
    [mkdir] Created dir: /tmp/ecj410814933
 [ecj-lint] Compiling 494 source files to /tmp/ecj410814933
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/TestAssertions.java (at line 44)
 [ecj-lint] 	new TestTokenStream1();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 2. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/TestAssertions.java (at line 45)
 [ecj-lint] 	new TestTokenStream2();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 3. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/TestAssertions.java (at line 47)
 [ecj-lint] 	new TestTokenStream3();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 4. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/TestMergeSchedulerExternal.java (at line 130)
 [ecj-lint] 	IndexWriter writer = new IndexWriter(dir, iwc);
 [ecj-lint] 	            ^^^^^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 5. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCachingTokenFilter.java (at line 120)
 [ecj-lint] 	Analyzer analyzer = new MockAnalyzer(random());
 [ecj-lint] 	         ^^^^^^^^
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCachingTokenFilter.java (at line 122)
 [ecj-lint] 	CachingTokenFilter buffer = new CachingTokenFilter(input);
 [ecj-lint] 	                   ^^^^^^
 [ecj-lint] Resource leak: 'buffer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 7. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCharFilter.java (at line 29)
 [ecj-lint] 	CharFilter cs = new CharFilter1(new StringReader(""));
 [ecj-lint] 	           ^^
 [ecj-lint] Resource leak: 'cs' is never closed
 [ecj-lint] ----------
 [ecj-lint] 8. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCharFilter.java (at line 34)
 [ecj-lint] 	CharFilter cs = new CharFilter2(new StringReader(""));
 [ecj-lint] 	           ^^
 [ecj-lint] Resource leak: 'cs' is never closed
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCharFilter.java (at line 39)
 [ecj-lint] 	CharFilter cs = new CharFilter2(new CharFilter1(new StringReader("")));
 [ecj-lint] 	           ^^
 [ecj-lint] Resource leak: 'cs' is never closed
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestCharFilter.java (at line 44)
 [ecj-lint] 	CharFilter cs = new CharFilter1(new CharFilter1(new StringReader("")));
 [ecj-lint] 	           ^^
 [ecj-lint] Resource leak: 'cs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 11. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestDelegatingAnalyzerWrapper.java (at line 39)
 [ecj-lint] 	DelegatingAnalyzerWrapper w2 = new DelegatingAnalyzerWrapper(Analyzer.GLOBAL_REUSE_STRATEGY) {
 [ecj-lint] 	                          ^^
 [ecj-lint] Resource leak: 'w2' is never closed
 [ecj-lint] ----------
 [ecj-lint] 12. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestDelegatingAnalyzerWrapper.java (at line 50)
 [ecj-lint] 	DelegatingAnalyzerWrapper w1 = new DelegatingAnalyzerWrapper(Analyzer.GLOBAL_REUSE_STRATEGY) {
 [ecj-lint] 	                          ^^
 [ecj-lint] Resource leak: 'w1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 13. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestDelegatingAnalyzerWrapper.java (at line 71)
 [ecj-lint] 	DelegatingAnalyzerWrapper w1 = new DelegatingAnalyzerWrapper(Analyzer.GLOBAL_REUSE_STRATEGY) {
 [ecj-lint] 	                          ^^
 [ecj-lint] Resource leak: 'w1' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 14. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/TestToken.java (at line 103)
 [ecj-lint] 	TokenStream ts = new MockTokenizer(Token.TOKEN_ATTRIBUTE_FACTORY, MockTokenizer.WHITESPACE, false, MockTokenizer.DEFAULT_MAX_TOKEN_LENGTH);
 [ecj-lint] 	            ^^
 [ecj-lint] Resource leak: 'ts' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 15. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/standard/TestStandardAnalyzer.java (at line 393)
 [ecj-lint] 	Analyzer a = new StandardAnalyzer();
 [ecj-lint] 	         ^
 [ecj-lint] Resource leak: 'a' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 16. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/tokenattributes/TestCharTermAttributeImpl.java (at line 172)
 [ecj-lint] 	Formatter formatter = new Formatter(t, Locale.ROOT);
 [ecj-lint] 	          ^^^^^^^^^
 [ecj-lint] Resource leak: 'formatter' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 17. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/analysis/tokenattributes/TestPackedTokenAttributeImpl.java (at line 62)
 [ecj-lint] 	TokenStream ts = new MockTokenizer(TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY, MockTokenizer.WHITESPACE, false, MockTokenizer.DEFAULT_MAX_TOKEN_LENGTH);
 [ecj-lint] 	            ^^
 [ecj-lint] Resource leak: 'ts' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 18. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java (at line 478)
 [ecj-lint] 	RAMInputStream in = new RAMInputStream("", buffer);
 [ecj-lint] 	               ^^
 [ecj-lint] Resource leak: 'in' is never closed
 [ecj-lint] ----------
 [ecj-lint] 19. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java (at line 528)
 [ecj-lint] 	RAMInputStream in = new RAMInputStream("", buffer);
 [ecj-lint] 	               ^^
 [ecj-lint] Resource leak: 'in' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 20. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAddIndexes.java (at line 423)
 [ecj-lint] 	IndexWriter writer = new IndexWriter(aux, dontMergeConfig);
 [ecj-lint] 	            ^^^^^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] ----------
 [ecj-lint] 21. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAddIndexes.java (at line 1433)
 [ecj-lint] 	DirectoryReader wrappedReader = new SoftDeletesDirectoryReaderWrapper(reader, "soft_delete");
 [ecj-lint] 	                ^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'wrappedReader' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 22. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAllFilesCheckIndexHeader.java (at line 58)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 23. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAllFilesDetectTruncation.java (at line 55)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 24. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAllFilesHaveChecksumFooter.java (at line 41)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 25. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestAllFilesHaveCodecHeader.java (at line 44)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 26. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 394)
 [ecj-lint] 	ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'cms' is never closed
 [ecj-lint] ----------
 [ecj-lint] 27. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 520)
 [ecj-lint] 	final IndexWriter w = new IndexWriter(dir, iwc);
 [ecj-lint] 	                  ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 28. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 592)
 [ecj-lint] 	ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'cms' is never closed
 [ecj-lint] ----------
 [ecj-lint] 29. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 613)
 [ecj-lint] 	ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'cms' is never closed
 [ecj-lint] ----------
 [ecj-lint] 30. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 620)
 [ecj-lint] 	ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'cms' is never closed
 [ecj-lint] ----------
 [ecj-lint] 31. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java (at line 628)
 [ecj-lint] 	ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'cms' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 32. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestDeletionPolicy.java (at line 451)
 [ecj-lint] 	writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
 [ecj-lint]                                     .setIndexDeletionPolicy(policy)
 [ecj-lint]                                     .setIndexCommit(lastCommit));
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'writer' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 33. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestDocValuesIndexing.java (at line 835)
 [ecj-lint] 	IndexWriter writer = new IndexWriter(dir, conf);
 [ecj-lint] 	            ^^^^^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 34. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestForceMergeForever.java (at line 82)
 [ecj-lint] 	return;
 [ecj-lint] 	^^^^^^^
 [ecj-lint] Resource leak: 'docs' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 35. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexFileDeleter.java (at line 451)
 [ecj-lint] 	RandomIndexWriter w = new RandomIndexWriter(random(), dir, iwc);
 [ecj-lint] 	                  ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 36. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java (at line 189)
 [ecj-lint] 	new IndexWriter(dir, new IndexWriterConfig(new MockAnalyzer(random()))).rollback();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 37. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java (at line 1696)
 [ecj-lint] 	IndexWriter w = new IndexWriter(dir,
 [ecj-lint] 	            ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 38. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java (at line 1708)
 [ecj-lint] 	IndexWriter w = new IndexWriter(dir,
 [ecj-lint] 	            ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 39. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java (at line 3539)
 [ecj-lint] 	new IndexWriter(dir, config);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 40. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterCommit.java (at line 298)
 [ecj-lint] 	writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
 [ecj-lint]                                     .setOpenMode(OpenMode.APPEND));
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'writer' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 41. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java (at line 912)
 [ecj-lint] 	IndexWriter modifier = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random(), MockTokenizer.WHITESPACE, false)));
 [ecj-lint] 	            ^^^^^^^^
 [ecj-lint] Resource leak: 'modifier' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 42. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java (at line 596)
 [ecj-lint] 	IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
 [ecj-lint] 	            ^^^^^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 43. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterLockRelease.java (at line 39)
 [ecj-lint] 	new IndexWriter(dir, new IndexWriterConfig(new MockAnalyzer(random())).setOpenMode(OpenMode.APPEND));
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 44. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterLockRelease.java (at line 42)
 [ecj-lint] 	new IndexWriter(dir, new IndexWriterConfig(new MockAnalyzer(random())).setOpenMode(OpenMode.APPEND));
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 45. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMaxDocs.java (at line 310)
 [ecj-lint] 	MultiReader mr = new MultiReader(subReaders);
 [ecj-lint] 	            ^^
 [ecj-lint] Resource leak: 'mr' is never closed
 [ecj-lint] ----------
 [ecj-lint] 46. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMaxDocs.java (at line 398)
 [ecj-lint] 	throw e;
 [ecj-lint] 	^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 47. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMaxDocs.java (at line 447)
 [ecj-lint] 	throw e;
 [ecj-lint] 	^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 48. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java (at line 500)
 [ecj-lint] 	IndexWriter w = new IndexWriter(
 [ecj-lint] 	            ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 49. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnDiskFull.java (at line 546)
 [ecj-lint] 	IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
 [ecj-lint] 	            ^^^^^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 50. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOutOfFileDescriptors.java (at line 40)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 51. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java (at line 585)
 [ecj-lint] 	final LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	                   ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 52. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestLongPostings.java (at line 44)
 [ecj-lint] 	Analyzer a = new MockAnalyzer(random());
 [ecj-lint] 	         ^
 [ecj-lint] Resource leak: 'a' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 53. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestParallelReaderEmptyIndex.java (at line 57)
 [ecj-lint] 	ParallelCompositeReader cpr = new ParallelCompositeReader(
 [ecj-lint] 	                        ^^^
 [ecj-lint] Resource leak: 'cpr' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 54. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestParallelTermEnum.java (at line 90)
 [ecj-lint] 	ParallelLeafReader pr = new ParallelLeafReader(ir1, ir2);
 [ecj-lint] 	                   ^^
 [ecj-lint] Resource leak: 'pr' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 55. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestPersistentSnapshotDeletionPolicy.java (at line 139)
 [ecj-lint] 	throw ioe;
 [ecj-lint] 	^^^^^^^^^^
 [ecj-lint] Resource leak: 'writer' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 56. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestReaderPool.java (at line 234)
 [ecj-lint] 	throw new AssertionError(ex);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'pool' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 57. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestRollback.java (at line 40)
 [ecj-lint] 	IndexWriter w = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
 [ecj-lint] 	            ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 58. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java (at line 470)
 [ecj-lint] 	throw new AssertionError(e);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'writer' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 59. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java (at line 536)
 [ecj-lint] 	throw new AssertionError(e);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'writer' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 60. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java (at line 675)
 [ecj-lint] 	DirectoryReader reader = new IncludeSoftDeletesWrapper(unwrapped);
 [ecj-lint] 	                ^^^^^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 61. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestSwappedIndexFiles.java (at line 50)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 62. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestTryDelete.java (at line 79)
 [ecj-lint] 	ReferenceManager<IndexSearcher> mgr = new SearcherManager(writer,
 [ecj-lint] 	                                ^^^
 [ecj-lint] Resource leak: 'mgr' is never closed
 [ecj-lint] ----------
 [ecj-lint] 63. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestTryDelete.java (at line 124)
 [ecj-lint] 	ReferenceManager<IndexSearcher> mgr = new SearcherManager(writer,
 [ecj-lint] 	                                ^^^
 [ecj-lint] Resource leak: 'mgr' is never closed
 [ecj-lint] ----------
 [ecj-lint] 64. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/index/TestTryDelete.java (at line 166)
 [ecj-lint] 	ReferenceManager<IndexSearcher> mgr = new SearcherManager(writer,
 [ecj-lint] 	                                ^^^
 [ecj-lint] Resource leak: 'mgr' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 65. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java (at line 406)
 [ecj-lint] 	MockAnalyzer analyzer = new MockAnalyzer(random());
 [ecj-lint] 	             ^^^^^^^^
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] ----------
 [ecj-lint] 66. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java (at line 524)
 [ecj-lint] 	RandomIndexWriter w = new RandomIndexWriter(random(), dir);
 [ecj-lint] 	                  ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 67. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java (at line 161)
 [ecj-lint] 	throw error.get();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 68. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java (at line 161)
 [ecj-lint] 	throw error.get();
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'mgr' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 69. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestSameScoresWithThreads.java (at line 49)
 [ecj-lint] 	LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	             ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 70. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java (at line 313)
 [ecj-lint] 	SearcherManager sm = new SearcherManager(writer, false, false, new SearcherFactory());
 [ecj-lint] 	                ^^
 [ecj-lint] Resource leak: 'sm' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 71. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java (at line 52)
 [ecj-lint] 	new TermQuery(new Term("foo", "bar"), TermContext.build(new MultiReader().getContext(), new Term("foo", "bar"))));
 [ecj-lint] 	                                                        ^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 72. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestBufferedIndexInput.java (at line 50)
 [ecj-lint] 	MyBufferedIndexInput input = new MyBufferedIndexInput();
 [ecj-lint] 	                     ^^^^^
 [ecj-lint] Resource leak: 'input' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 73. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestHugeRamFile.java (at line 88)
 [ecj-lint] 	RAMInputStream in = new RAMInputStream("testcase", f);
 [ecj-lint] 	               ^^
 [ecj-lint] Resource leak: 'in' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 74. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestRAMDirectory.java (at line 81)
 [ecj-lint] 	RAMDirectory ramDir = new RAMDirectory(fsDir, newIOContext(random()));
 [ecj-lint] 	             ^^^^^^
 [ecj-lint] Resource leak: 'ramDir' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 75. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 32)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 76. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 37)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 77. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 43)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 78. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 51)
 [ecj-lint] 	TrackingDirectoryWrapper dir = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] ----------
 [ecj-lint] 79. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/store/TestTrackingDirectoryWrapper.java (at line 60)
 [ecj-lint] 	TrackingDirectoryWrapper dest = new TrackingDirectoryWrapper(new RAMDirectory());
 [ecj-lint] 	                         ^^^^
 [ecj-lint] Resource leak: 'dest' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 80. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestCloseableThreadLocal.java (at line 31)
 [ecj-lint] 	CloseableThreadLocal<Object> ctl = new CloseableThreadLocal<>();
 [ecj-lint] 	                             ^^^
 [ecj-lint] Resource leak: 'ctl' is never closed
 [ecj-lint] ----------
 [ecj-lint] 81. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestCloseableThreadLocal.java (at line 39)
 [ecj-lint] 	CloseableThreadLocal<Object> ctl = new CloseableThreadLocal<>();
 [ecj-lint] 	                             ^^^
 [ecj-lint] Resource leak: 'ctl' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 82. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/TestQueryBuilder.java (at line 21)
 [ecj-lint] 	import java.io.Reader;
 [ecj-lint] 	       ^^^^^^^^^^^^^^
 [ecj-lint] The import java.io.Reader is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 83. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/Test2BBKDPoints.java (at line 44)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "_0", 1, Long.BYTES,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 84. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/Test2BBKDPoints.java (at line 81)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "_0", 2, Long.BYTES,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 85. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 51)
 [ecj-lint] 	BKDWriter w = new BKDWriter(100, dir, "tmp", 1, 4, 2, 1.0f, 100, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 86. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 126)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "tmp", numDims, 4, maxPointsInLeafNode, maxMB, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 87. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 267)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs, dir, "tmp", numDims, numBytesPerDim, maxPointsInLeafNode, maxMB, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 88. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 715)
 [ecj-lint] 	w = new BKDWriter(numValues, dir, "_" + seg, numDims, numBytesPerDim, maxPointsInLeafNode, maxMB, docValues.length, false);
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 89. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 974)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", 1, Integer.BYTES, 2, 0.01f, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 90. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 1015)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", 2, Integer.BYTES, 2, 0.01f, numDocs,
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 91. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/bkd/TestBKD.java (at line 1066)
 [ecj-lint] 	BKDWriter w = new BKDWriter(numDocs+1, dir, "tmp", numDims, bytesPerDim, 32, 1f, numDocs, true);
 [ecj-lint] 	          ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 92. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java (at line 311)
 [ecj-lint] 	final LineFileDocs docs = new LineFileDocs(random());
 [ecj-lint] 	                   ^^^^
 [ecj-lint] Resource leak: 'docs' is never closed
 [ecj-lint] ----------
 [ecj-lint] 92 problems (1 error, 91 warnings)
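
Nearly all of the problems listed above (62-92) are the same ecj diagnostic, "Resource leak: '...' is never closed": a Closeable such as an analyzer, writer, or directory wrapper is created in a test and the compiler cannot prove it is closed on every path. The usual way to clear this class of warning is to scope the object with try-with-resources (or close it explicitly in a finally block). A minimal, generic sketch of the before/after pattern, using plain JDK classes rather than the actual Lucene test code flagged above:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ResourceLeakSketch {
      // Leaky shape (what ecj flags): the reader is created but never closed.
      static long countLinesLeaky(Path p) throws IOException {
        BufferedReader reader = Files.newBufferedReader(p, StandardCharsets.UTF_8);
        return reader.lines().count();   // 'reader' is never closed
      }

      // Clean shape: try-with-resources closes the reader on every exit path,
      // which is exactly what silences the "never closed" warning.
      static long countLinesClean(Path p) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(p, StandardCharsets.UTF_8)) {
          return reader.lines().count();
        }
      }
    }
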

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build.xml:202: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2092: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2125: Compile failed; see the compiler error output for details.

Total time: 95 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

[JENKINS] Lucene-Solr-Tests-7.x - Build # 918 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/918/

All tests passed

Build Log:
[...truncated 61044 lines...]
-ecj-javadoc-lint-src:
    [mkdir] Created dir: /tmp/ecj729983867
 [ecj-lint] Compiling 1233 source files to /tmp/ecj729983867
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java (at line 219)
 [ecj-lint] 	return (NamedList<Object>) new JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint] 	                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 2. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java (at line 32)
 [ecj-lint] 	import org.apache.solr.client.solrj.cloud.autoscaling.Policy;
 [ecj-lint] 	       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import org.apache.solr.client.solrj.cloud.autoscaling.Policy is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 3. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/cloud/api/collections/RestoreCmd.java (at line 260)
 [ecj-lint] 	throw new SolrException(ErrorCode.BAD_REQUEST, "Unexpected number of replicas, replicationFactor, " +
 [ecj-lint]               Replica.Type.NRT + " or " + Replica.Type.TLOG + " must be greater than 0");
 [ecj-lint] 	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'repository' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 4. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/update/UpdateLog.java (at line 1865)
 [ecj-lint] 	if (exceptionOnExecuteUpdate.get() != null) throw exceptionOnExecuteUpdate.get();
 [ecj-lint] 	                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'proc' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 5. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/util/FileUtils.java (at line 50)
 [ecj-lint] 	in = new FileInputStream(src).getChannel();
 [ecj-lint] 	     ^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/java/org/apache/solr/util/FileUtils.java (at line 51)
 [ecj-lint] 	out = new FileOutputStream(destination).getChannel();
 [ecj-lint] 	      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 6 problems (1 error, 5 warnings)
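
Warnings 5 and 6 above come from the stream-to-channel idiom flagged in FileUtils: a FileInputStream/FileOutputStream is created inline only to call getChannel(), so ecj sees an anonymous Closeable that is never provably closed. A hedged sketch of the kind of rewrite that makes the lifetime explicit, using only JDK classes and not claiming to be the actual org.apache.solr.util.FileUtils code:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.channels.FileChannel;

    public class CopyFileSketch {
      // Hypothetical copy helper: the streams are held in try-with-resources,
      // and the channels are taken from them inside the try block, so every
      // Closeable has an obvious owner and close point.
      public static void copy(File src, File dst) throws IOException {
        try (FileInputStream fis = new FileInputStream(src);
             FileOutputStream fos = new FileOutputStream(dst)) {
          FileChannel in = fis.getChannel();
          FileChannel out = fos.getChannel();
          long size = in.size();
          long pos = 0;
          while (pos < size) {
            pos += in.transferTo(pos, size - pos, out);
          }
        }
      }
    }
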

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:633: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build.xml:680: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2086: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2125: Compile failed; see the compiler error output for details.

Total time: 103 minutes 22 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any