Posted to dev@lucene.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2018/12/29 13:22:20 UTC

[JENKINS] Lucene-Solr-repro - Build # 2583 - Still Unstable

Build: https://builds.apache.org/job/Lucene-Solr-repro/2583/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/247/consoleText

[repro] Revision: 345a655f216258c406c384ada9aa6d5f14e254f9

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration -Dtests.method=testNodeMarkersRegistration -Dtests.seed=21F7DF3C0DA239EF -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-CA -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration -Dtests.method=testNodeAddedTriggerRestoreState -Dtests.seed=21F7DF3C0DA239EF -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-CA -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true -Dtests.file.encoding=UTF-8
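The repro lines above can be replayed from a lucene-solr checkout at the listed revision. As a convenience, the test case and seed can be pulled out of a repro line with standard tools (a sketch, assuming a POSIX shell; the $repro variable below holds a shortened copy of the first repro line):

```shell
# Hypothetical helper: extract -Dtestcase and -Dtests.seed from a [repro] line
# so they can be reused in a local "ant test" invocation.
repro='ant test  -Dtestcase=TestSimTriggerIntegration -Dtests.method=testNodeMarkersRegistration -Dtests.seed=21F7DF3C0DA239EF -Dtests.multiplier=2'
testcase=$(printf '%s\n' "$repro" | sed -n 's/.*-Dtestcase=\([^ ]*\).*/\1/p')
seed=$(printf '%s\n' "$repro" | sed -n 's/.*-Dtests\.seed=\([^ ]*\).*/\1/p')
echo "testcase=$testcase seed=$seed"
```

Keeping the exact -Dtests.seed (and the locale/timezone properties) matters, since the randomized test framework derives its fixtures from them.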

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 345a655f216258c406c384ada9aa6d5f14e254f9
[repro] git fetch
[repro] git checkout 345a655f216258c406c384ada9aa6d5f14e254f9

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestSimTriggerIntegration
[repro] ant compile-test

[...truncated 3592 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestSimTriggerIntegration" -Dtests.showOutput=onerror  -Dtests.seed=21F7DF3C0DA239EF -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-CA -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 7004 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 86 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestSimTriggerIntegration
[repro] ant compile-test

[...truncated 3592 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestSimTriggerIntegration" -Dtests.showOutput=onerror  -Dtests.seed=21F7DF3C0DA239EF -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-CA -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 5281 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout 345a655f216258c406c384ada9aa6d5f14e254f9

[...truncated 8 lines...]
[repro] Exiting with code 256
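A note on the "code 256" above: a POSIX shell keeps only the low 8 bits of a child's exit status, so a literal process exit of 256 is observed as 0 by the caller. Presumably the repro script tracks 256 internally as an accumulated failure code rather than returning it verbatim to Jenkins. A quick demonstration:

```shell
# Exit statuses are truncated to 8 bits, so "exit 256" wraps to 0.
sh -c 'exit 256'
echo "observed status: $?"   # prints: observed status: 0
```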

[...truncated 5 lines...]

[JENKINS] Lucene-Solr-repro - Build # 2584 - Still Unstable

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Build: https://builds.apache.org/job/Lucene-Solr-repro/2584/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/416/consoleText

[repro] Revision: 9aa15a2accd47214dc2d76a035fa31450a079f62

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=7A776A947A0461A0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-NZ -Dtests.timezone=Pacific/Ponape -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 345a655f216258c406c384ada9aa6d5f14e254f9
[repro] git fetch
[repro] git checkout 9aa15a2accd47214dc2d76a035fa31450a079f62

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ForceLeaderTest
[repro] ant compile-test

[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.ForceLeaderTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=7A776A947A0461A0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-NZ -Dtests.timezone=Pacific/Ponape -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 1791 lines...]
   [junit4]   2> 47492 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 47758 ERROR (indexFetcher-80-thread-1) [    ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/15)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"0",
   [junit4]   2>   "shards":{"shard1":{
   [junit4]   2>       "range":"80000000-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node2":{
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t1",
   [junit4]   2>           "base_url":"http://127.0.0.1:45355/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:43543/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t3",
   [junit4]   2>           "node_name":"127.0.0.1:43543_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:38474/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t5",
   [junit4]   2>           "node_name":"127.0.0.1:38474_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"0",
   [junit4]   2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:43543_gsn%2Fuk, 127.0.0.1:36990_gsn%2Fuk, 127.0.0.1:38474_gsn%2Fuk]
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
   [junit4]   2> 	at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
   [junit4]   2> 	at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
   [junit4]   2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
   [junit4]   2> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
   [junit4]   2> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> 
   [junit4]   2> 47758 INFO  (recoveryExecutor-66-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
   [junit4]   2> 48171 ERROR (indexFetcher-73-thread-1) [    ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/15)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"0",
   [junit4]   2>   "shards":{"shard1":{
   [junit4]   2>       "range":"80000000-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node2":{
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t1",
   [junit4]   2>           "base_url":"http://127.0.0.1:45355/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:43543/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t3",
   [junit4]   2>           "node_name":"127.0.0.1:43543_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:38474/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t5",
   [junit4]   2>           "node_name":"127.0.0.1:38474_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"0",
   [junit4]   2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:43543_gsn%2Fuk, 127.0.0.1:36990_gsn%2Fuk, 127.0.0.1:38474_gsn%2Fuk]
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
   [junit4]   2> 	at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
   [junit4]   2> 	at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
   [junit4]   2> 	at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
   [junit4]   2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
   [junit4]   2> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
   [junit4]   2> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> 
   [junit4]   2> 48172 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
   [junit4]   2> 48172 INFO  (updateExecutor-37-thread-2-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 48172 INFO  (updateExecutor-37-thread-2-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 6207ms
   [junit4]   2> 48493 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 
   [junit4]   2> 48493 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48493 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 48494 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 
   [junit4]   2> 48494 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48494 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 48495 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 
   [junit4]   2> 48495 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48495 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 48496 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 
   [junit4]   2> 48496 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48496 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 48496 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 
   [junit4]   2> 48497 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48497 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 48497 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 
   [junit4]   2> 48497 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 48497 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 49498 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 
   [junit4]   2> 49499 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49499 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 49499 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 
   [junit4]   2> 49499 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49499 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 49500 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 
   [junit4]   2> 49500 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49500 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 49501 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 
   [junit4]   2> 49501 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49501 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 49501 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 
   [junit4]   2> 49501 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49502 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 49502 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 
   [junit4]   2> 49502 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 49502 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 49759 INFO  (ScheduledTrigger-8-thread-3) [    ] o.a.s.c.a.SystemLogListener Collection .system does not exist, disabling logging.
   [junit4]   2> 49780 INFO  (qtp639435276-33) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49782 INFO  (qtp639435276-29) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49784 INFO  (qtp639435276-30) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49787 INFO  (qtp639435276-32) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49789 INFO  (qtp639435276-31) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
   [junit4]   2> 49793 INFO  (qtp639435276-33) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
   [junit4]   2> 49796 INFO  (qtp639435276-29) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
   [junit4]   2> 49798 INFO  (qtp639435276-30) [n:127.0.0.1:36990_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
   [junit4]   2> 49800 INFO  (SocketProxy-Acceptor-38474) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=48848,localport=38474], receiveBufferSize:531000
   [junit4]   2> 49801 INFO  (SocketProxy-Acceptor-38474) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=46202,localport=44678], receiveBufferSize=530904
   [junit4]   2> 49803 INFO  (qtp1309884849-132) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 49805 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49807 INFO  (qtp1309884849-133) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49810 INFO  (qtp1309884849-135) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49814 INFO  (qtp1309884849-131) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49816 INFO  (qtp1309884849-132) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49819 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49822 INFO  (qtp1309884849-133) [n:127.0.0.1:38474_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49823 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 49823 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk START replicas=[http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 49824 INFO  (SocketProxy-Acceptor-43543) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=48100,localport=43543], receiveBufferSize:531000
   [junit4]   2> 49825 INFO  (qtp756160782-98) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 49826 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk  Received 1 versions from http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/ fingerprint:null
   [junit4]   2> 49827 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 49828 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk  No additional versions requested. ourHighThreshold=1621216223095160832 otherLowThreshold=1621216223095160832 ourHighest=1621216223095160832 otherHighest=1621216223095160832
   [junit4]   2> 49828 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk DONE. sync succeeded
   [junit4]   2> 49828 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 49828 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/ to sync
   [junit4]   2> 49829 INFO  (qtp756160782-99) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk START replicas=[http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 49831 INFO  (qtp1309884849-135) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 49831 INFO  (qtp1309884849-135) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 49831 INFO  (SocketProxy-Acceptor-43543) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=40831,localport=53104], receiveBufferSize=530904
   [junit4]   2> 49832 INFO  (qtp756160782-99) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 49832 INFO  (qtp756160782-99) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync 
   [junit4]   2> 49832 INFO  (qtp756160782-99) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=3
   [junit4]   2> 49832 INFO  (qtp756160782-100) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49833 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/:  sync completed with http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 49834 WARN  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t5 as down and I haven't recovered yet, so I shouldn't be the leader.
   [junit4]   2> 49834 ERROR (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
   [junit4]   2> 	at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
   [junit4]   2> 	at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
   [junit4]   2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2> 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> 
   [junit4]   2> 49834 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
   [junit4]   2> 49834 INFO  (qtp756160782-96) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 49835 INFO  (zkCallback-71-thread-1) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
   [junit4]   2> 49835 WARN  (updateExecutor-65-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
   [junit4]   2> 49835 INFO  (updateExecutor-65-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 49835 INFO  (updateExecutor-65-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 6195ms
   [junit4]   2> 49840 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 49840 WARN  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
   [junit4]   2> 49840 INFO  (qtp756160782-98) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 49844 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49844 INFO  (zkCallback-71-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49844 INFO  (zkCallback-71-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49844 INFO  (zkCallback-71-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49845 INFO  (zkCallback-47-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49845 INFO  (zkCallback-47-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49845 INFO  (zkCallback-47-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 49849 INFO  (qtp756160782-99) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49852 INFO  (qtp756160782-100) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49855 INFO  (qtp756160782-96) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49857 INFO  (qtp756160782-98) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 49867 INFO  (AutoscalingActionExecutor-9-thread-1) [    ] o.a.s.c.a.ExecutePlanAction No operations to execute for event: {
   [junit4]   2>   "id":"19bf27b251dc2dT2a8ggqdqlrzufiespe694ucg6",
   [junit4]   2>   "source":".auto_add_replicas",
   [junit4]   2>   "eventTime":7247051634105389,
   [junit4]   2>   "eventType":"NODELOST",
   [junit4]   2>   "properties":{
   [junit4]   2>     "eventTimes":[7247051634105389],
   [junit4]   2>     "preferredOperation":"movereplica",
   [junit4]   2>     "_enqueue_time_":7247061644222564,
   [junit4]   2>     "nodeNames":["127.0.0.1:45355_gsn%2Fuk"]}}
   [junit4]   2> 50504 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 
   [junit4]   2> 50504 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50505 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 50507 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 
   [junit4]   2> 50507 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50507 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 50507 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 
   [junit4]   2> 50507 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50507 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 50508 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 
   [junit4]   2> 50508 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50508 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 50509 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 
   [junit4]   2> 50509 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50509 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 50509 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 
   [junit4]   2> 50509 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 50510 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 51511 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 
   [junit4]   2> 51511 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51511 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 51512 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 
   [junit4]   2> 51512 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51512 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 51512 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 
   [junit4]   2> 51512 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51512 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 51513 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 
   [junit4]   2> 51513 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51513 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 51514 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 
   [junit4]   2> 51514 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51514 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 51514 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 
   [junit4]   2> 51514 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 51515 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 52340 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 52341 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk START replicas=[http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 52343 INFO  (qtp1309884849-131) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 52344 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk  Received 1 versions from http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/ fingerprint:null
   [junit4]   2> 52346 INFO  (qtp1309884849-132) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 52347 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk  No additional versions requested. ourHighThreshold=1621216223095160832 otherLowThreshold=1621216223095160832 ourHighest=1621216223095160832 otherHighest=1621216223095160832
   [junit4]   2> 52347 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk DONE. sync succeeded
   [junit4]   2> 52347 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 52347 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/: try and ask http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/ to sync
   [junit4]   2> 52349 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk START replicas=[http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 52355 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:4.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 52355 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=5
   [junit4]   2> 52356 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 52356 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync 
   [junit4]   2> 52356 INFO  (qtp1309884849-134) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/&wt=javabin&version=2} status=0 QTime=8
   [junit4]   2> 52357 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/:  sync completed with http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 52358 WARN  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t3 as down and I haven't recovered yet, so I shouldn't be the leader.
   [junit4]   2> 52358 ERROR (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
   [junit4]   2> 	at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
   [junit4]   2> 	at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
   [junit4]   2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2> 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> 
   [junit4]   2> 52358 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
   [junit4]   2> 52359 INFO  (zkCallback-47-thread-1) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
   [junit4]   2> 52359 WARN  (updateExecutor-37-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
   [junit4]   2> 52362 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 52362 WARN  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
   [junit4]   2> 52363 INFO  (zkCallback-47-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52363 INFO  (zkCallback-47-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52363 INFO  (zkCallback-71-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52363 INFO  (zkCallback-71-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52363 INFO  (zkCallback-47-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52363 INFO  (zkCallback-71-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 52516 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 
   [junit4]   2> 52516 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52516 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 52517 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 
   [junit4]   2> 52517 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52517 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 52517 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 
   [junit4]   2> 52518 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52518 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 52518 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 
   [junit4]   2> 52518 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52518 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 52519 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 
   [junit4]   2> 52519 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52519 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 52519 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 
   [junit4]   2> 52520 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 52520 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.AbstractFullDistribZkTestBase No more retries available! Add batch failed due to: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request.
   [junit4]   2> 52520 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.ForceLeaderTest Document couldn't be sent, which is expected.
   [junit4]   2> 52533 INFO  (zkConnectionManagerCallback-94-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 52536 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 52538 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[7A776A947A0461A0]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41246/solr ready
   [junit4]   2> 52539 INFO  (SocketProxy-Acceptor-43543) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=48120,localport=43543], receiveBufferSize:531000
   [junit4]   2> 52540 INFO  (SocketProxy-Acceptor-43543) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=40831,localport=53124], receiveBufferSize=530904
   [junit4]   2> 52541 INFO  (qtp756160782-100) [n:127.0.0.1:43543_gsn%2Fuk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :forceleader with params action=FORCELEADER&collection=forceleader_test_collection&shard=shard1&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 52541 INFO  (qtp756160782-100) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Force leader invoked, state: znodeVersion: 10
   [junit4]   2> live nodes:[127.0.0.1:43543_gsn%2Fuk, 127.0.0.1:36990_gsn%2Fuk, 127.0.0.1:38474_gsn%2Fuk]
   [junit4]   2> collections:{collection1=DocCollection(collection1//clusterstate.json/10)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"1",
   [junit4]   2>   "shards":{
   [junit4]   2>     "shard1":{
   [junit4]   2>       "range":"80000000-ffffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{"core_node4":{
   [junit4]   2>           "core":"collection1_shard1_replica_n2",
   [junit4]   2>           "base_url":"http://127.0.0.1:45355/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "leader":"true"}}},
   [junit4]   2>     "shard2":{
   [junit4]   2>       "range":"0-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node3":{
   [junit4]   2>           "core":"collection1_shard2_replica_n1",
   [junit4]   2>           "base_url":"http://127.0.0.1:43543/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:43543_gsn%2Fuk",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "leader":"true"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "core":"collection1_shard2_replica_n5",
   [junit4]   2>           "base_url":"http://127.0.0.1:38474/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:38474_gsn%2Fuk",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"1",
   [junit4]   2>   "tlogReplicas":"0"}, forceleader_test_collection=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/17)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"0",
   [junit4]   2>   "shards":{"shard1":{
   [junit4]   2>       "range":"80000000-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node2":{
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t1",
   [junit4]   2>           "base_url":"http://127.0.0.1:45355/gsn/uk",
   [junit4]   2>           "node_name":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:43543/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t3",
   [junit4]   2>           "node_name":"127.0.0.1:43543_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:38474/gsn/uk",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t5",
   [junit4]   2>           "node_name":"127.0.0.1:38474_gsn%2Fuk",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"0",
   [junit4]   2>   "tlogReplicas":"3"}, control_collection=LazyCollectionRef(control_collection)}
   [junit4]   2> 52547 INFO  (qtp756160782-100) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Cleaning out LIR data, which was:     /collections/forceleader_test_collection/leader_initiated_recovery/shard1 (2)
   [junit4]   2>      /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node6 (0)
   [junit4]   2>      DATA:
   [junit4]   2>          {
   [junit4]   2>            "state":"down",
   [junit4]   2>            "createdByNodeName":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>            "createdByCoreNodeName":"core_node2"}
   [junit4]   2>      /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node4 (0)
   [junit4]   2>      DATA:
   [junit4]   2>          {
   [junit4]   2>            "state":"down",
   [junit4]   2>            "createdByNodeName":"127.0.0.1:45355_gsn%2Fuk",
   [junit4]   2>            "createdByCoreNodeName":"core_node2"}
   [junit4]   2> 
   [junit4]   2> 54380 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=false
   [junit4]   2> 54380 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t3 stopping background replication from leader
   [junit4]   2> 54862 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 54862 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk START replicas=[http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 54865 INFO  (qtp756160782-96) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 54865 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk  Received 1 versions from http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/ fingerprint:null
   [junit4]   2> 54867 INFO  (qtp756160782-98) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 54870 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk  No additional versions requested. ourHighThreshold=1621216223095160832 otherLowThreshold=1621216223095160832 ourHighest=1621216223095160832 otherHighest=1621216223095160832
   [junit4]   2> 54870 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:38474/gsn/uk DONE. sync succeeded
   [junit4]   2> 54870 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 54871 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/ to sync
   [junit4]   2> 54872 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:43543/gsn/uk START replicas=[http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 54874 INFO  (qtp1309884849-133) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 54874 INFO  (qtp1309884849-133) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 54875 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 54875 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync 
   [junit4]   2> 54876 INFO  (qtp756160782-97) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp=/gsn/uk path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=3
   [junit4]   2> 54876 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/:  sync completed with http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 54877 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t5 stopping background replication from leader
   [junit4]   2> 54880 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext Replaying tlog before become new leader
   [junit4]   2> 54880 WARN  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Starting log replay tlog{file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.ForceLeaderTest_7A776A947A0461A0-001/shard-3-001/cores/forceleader_test_collection_shard1_replica_t5/data/tlog/tlog.0000000000000000000 refcount=2} active=false starting pos=0 inSortedOrder=true
   [junit4]   2> 54886 INFO  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 start commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 54886 INFO  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@1903b417 commitCommandVersion:0
   [junit4]   2> 55032 INFO  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.s.SolrIndexSearcher Opening [Searcher@75944417[forceleader_test_collection_shard1_replica_t5] main]
   [junit4]   2> 55040 INFO  (searcherExecutor-74-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SolrCore [forceleader_test_collection_shard1_replica_t5] Registered new searcher Searcher@75944417[forceleader_test_collection_shard1_replica_t5] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-137-generic, timestamp=1546112291091}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 55042 INFO  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 55043 INFO  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_test_collection_shard1_replica_t5] {add=[1 (1621216223095160832)]} 0 162
   [junit4]   2> 55043 WARN  (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:38474_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Log replay finished. recoveryInfo=RecoveryInfo{adds=1 deletes=0 deleteByQuery=0 errors=0 positionOfStart=0}
   [junit4]   2> 55045 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/forceleader_test_collection/leaders/shard1/leader after winning as /collections/forceleader_test_collection/leader_elect/shard1/election/72532534578118674-core_node6-n_0000000006
   [junit4]   2> 55056 INFO  (zkCallback-47-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55056 INFO  (zkCallback-47-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55056 INFO  (zkCallback-47-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55057 INFO  (zkCallback-71-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55057 INFO  (zkCallback-71-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55057 INFO  (zkCallback-71-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 55059 INFO  (SocketProxy-Acceptor-38474) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=48882,localport=38474], receiveBufferSize:531000
   [junit4]   2> 55061 INFO  (zkCallback-71-thread-4) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/ shard1
   [junit4]   2> 55067 INFO  (SocketProxy-Acceptor-38474) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=46202,localport=44712], receiveBufferSize=530904
   [junit4]   2> 55090 INFO  (qtp1309884849-131) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/admin/ping params={wt=javabin&version=2} hits=1 status=0 QTime=22
   [junit4]   2> 55091 INFO  (qtp1309884849-131) [n:127.0.0.1:38474_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp=/gsn/uk path=/admin/ping params={wt=javabin&version=2} status=0 QTime=22
   [junit4]   2> 55092 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[forceleader_test_collection_shard1_replica_t3]
   [junit4]   2> 55093 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=tlog{file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.ForceLeaderTest_7A776A947A0461A0-001/shard-1-001/cores/forceleader_test_collection_shard1_replica_t3/data/tlog/tlog.0000000000000000000 refcount=1}}
   [junit4]   2> 55093 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Publishing state of core [forceleader_test_collection_shard1_replica_t3] as recovering, leader is [http://127.0.0.1:38474/gsn/uk/forceleader_test_collection_shard1_replica_t5/] and I am [http://127.0.0.1:43543/gsn/uk/forceleader_test_collection_shard1_replica_t3/]
   [junit4]   2> 55096 INFO  (recoveryExecutor-38-thread-1-processing-n:127.0.0.1:43543_gsn%2Fuk x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:43543_gsn%2Fuk c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:38474/gsn/uk]; [WaitForState: action=PREPRECOVERY&core=forceleader_test_collection_shard1_replica_t5&nodeName=127.0.0.1:43543_gsn%252Fuk&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 55097 INFO  (qtp1309884849-132) [n:127.0.0.1:38474_gsn%2Fuk    x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 55097 INFO  (qtp1309884849-132) [n:127.0.0.1:38474_gsn%2Fuk    x:forceleader_test_collection_shard1_

[...truncated too long message...]

   [junit4]   2> 433775 INFO  (closeThreadPool-466-thread-2) [    ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null
   [junit4]   2> 433775 INFO  (closeThreadPool-466-thread-2) [    ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@50e9739e: rootName = null, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@77159a3f
   [junit4]   2> 433776 INFO  (closeThreadPool-466-thread-2) [    ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null
   [junit4]   2> 433780 INFO  (OverseerAutoScalingTriggerThread-72532560122281996-127.0.0.1:39195_gsn%2Fuk-n_0000000001) [    ] o.a.s.c.a.OverseerTriggerThread OverseerTriggerThread woken up but we are closed, exiting.
   [junit4]   2> 433784 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x101affaa8bd000f, likely client has closed socket
   [junit4]   2> 433795 INFO  (closeThreadPool-475-thread-2) [    ] o.a.s.c.Overseer Overseer (id=72532560122281996-127.0.0.1:39195_gsn%2Fuk-n_0000000001) closing
   [junit4]   2> 433824 INFO  (zkCallback-454-thread-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (3)
   [junit4]   2> 433825 INFO  (closeThreadPool-466-thread-6) [    ] o.a.s.c.Overseer Overseer (id=72532560122281996-127.0.0.1:39195_gsn%2Fuk-n_0000000001) closing
   [junit4]   2> 433826 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x101affaa8bd000e, likely client has closed socket
   [junit4]   2> 433827 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x101affaa8bd0013, likely client has closed socket
   [junit4]   2> 433827 INFO  (zkCallback-454-thread-3) [    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:39367_gsn%2Fuk
   [junit4]   2> 433836 INFO  (closeThreadPool-466-thread-6) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@39f556b5{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 433837 INFO  (closeThreadPool-466-thread-8) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@26f3605a{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 433837 INFO  (closeThreadPool-466-thread-8) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@e3b1aa7{/gsn/uk,null,UNAVAILABLE}
   [junit4]   2> 433837 INFO  (closeThreadPool-466-thread-8) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 433842 INFO  (closeThreadPool-466-thread-6) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@5c483dc4{/gsn/uk,null,UNAVAILABLE}
   [junit4]   2> 433842 WARN  (closeThreadPool-466-thread-8) [    ] o.a.s.c.s.c.SocketProxy Closing 8 connections to: http://127.0.0.1:34716/gsn/uk, target: http://127.0.0.1:39229/gsn/uk
   [junit4]   2> 433843 INFO  (closeThreadPool-466-thread-7) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@662b7f5f{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 433844 INFO  (closeThreadPool-466-thread-6) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 433845 INFO  (closeThreadPool-466-thread-7) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@46994a09{/gsn/uk,null,UNAVAILABLE}
   [junit4]   2> 433846 INFO  (closeThreadPool-466-thread-7) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 433846 WARN  (closeThreadPool-466-thread-7) [    ] o.a.s.c.s.c.SocketProxy Closing 11 connections to: http://127.0.0.1:43494/gsn/uk, target: http://127.0.0.1:32785/gsn/uk
   [junit4]   2> 433849 WARN  (closeThreadPool-466-thread-6) [    ] o.a.s.c.s.c.SocketProxy Closing 7 connections to: http://127.0.0.1:39195/gsn/uk, target: http://127.0.0.1:43515/gsn/uk
   [junit4]   2> 433854 INFO  (closeThreadPool-466-thread-2) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@60371782{HTTP/1.1,[http/1.1]}{127.0.0.1:36439}
   [junit4]   2> 433854 INFO  (closeThreadPool-466-thread-2) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@4841fef{/gsn/uk,null,UNAVAILABLE}
   [junit4]   2> 433854 INFO  (closeThreadPool-466-thread-2) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 433862 WARN  (closeThreadPool-466-thread-2) [    ] o.a.s.c.s.c.SocketProxy Closing 3 connections to: http://127.0.0.1:39367/gsn/uk, target: http://127.0.0.1:36439/gsn/uk
   [junit4]   2> 433862 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[7A776A947A0461A0]) [    ] o.a.s.c.ZkTestServer Shutting down ZkTestServer.
   [junit4]   2> 433868 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[7A776A947A0461A0]) [    ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:46013
   [junit4]   2> 433868 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[7A776A947A0461A0]) [    ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 46013
   [junit4]   2> 433871 WARN  (ZkTestServer Run Thread) [    ] o.a.s.c.ZkTestServer Watch limit violations: 
   [junit4]   2> Maximum concurrent create/delete watches above limit:
   [junit4]   2> 
   [junit4]   2> 	46	/solr/collections/forceleader_lower_terms_collection/terms/shard1
   [junit4]   2> 	13	/solr/aliases.json
   [junit4]   2> 	9	/solr/collections/collection1/terms/shard2
   [junit4]   2> 	5	/solr/security.json
   [junit4]   2> 	5	/solr/configs/conf1
   [junit4]   2> 	3	/solr/collections/forceleader_lower_terms_collection/state.json
   [junit4]   2> 	3	/solr/collections/control_collection/terms/shard1
   [junit4]   2> 	2	/solr/collections/collection1/terms/shard1
   [junit4]   2> 
   [junit4]   2> Maximum concurrent data watches above limit:
   [junit4]   2> 
   [junit4]   2> 	44	/solr/collections/forceleader_lower_terms_collection/state.json
   [junit4]   2> 	30	/solr/collections/collection1/state.json
   [junit4]   2> 	14	/solr/collections/control_collection/state.json
   [junit4]   2> 	13	/solr/clusterprops.json
   [junit4]   2> 	13	/solr/clusterstate.json
   [junit4]   2> 	2	/solr/autoscaling.json
   [junit4]   2> 	2	/solr/collections/forceleader_lower_terms_collection/leader_elect/shard1/election/72532560122281998-core_node3-n_0000000001
   [junit4]   2> 
   [junit4]   2> Maximum concurrent children watches above limit:
   [junit4]   2> 
   [junit4]   2> 	13	/solr/collections
   [junit4]   2> 	12	/solr/live_nodes
   [junit4]   2> 	2	/solr/overseer/queue
   [junit4]   2> 	2	/solr/autoscaling/events/.scheduled_maintenance
   [junit4]   2> 	2	/solr/autoscaling/events/.auto_add_replicas
   [junit4]   2> 	2	/solr/overseer/queue-work
   [junit4]   2> 	2	/solr/overseer/collection-queue-work
   [junit4]   2> 
   [junit4] OK      38.4s J0 | ForceLeaderTest.testReplicasInLowerTerms
   [junit4]   2> NOTE: leaving temporary files on disk at: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.ForceLeaderTest_7A776A947A0461A0-001
   [junit4]   2> Dec 29, 2018 7:44:30 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {multiDefault=PostingsFormat(name=Memory), a_t=PostingsFormat(name=Memory), id=Lucene50(blocksize=128), text=FST50}, docValues:{range_facet_l_dv=DocValuesFormat(name=Lucene70), _version_=DocValuesFormat(name=Asserting), intDefault=DocValuesFormat(name=Asserting), id_i1=DocValuesFormat(name=Asserting), range_facet_i_dv=DocValuesFormat(name=Lucene70), intDvoDefault=DocValuesFormat(name=Lucene70), range_facet_l=DocValuesFormat(name=Lucene70), timestamp=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=459, maxMBSortInHeap=6.114907444039353, sim=RandomSimilarity(queryNorm=true): {}, locale=en-NZ, timezone=Pacific/Ponape
   [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=242019128,total=504365056
   [junit4]   2> NOTE: All tests run in this JVM: [ForceLeaderTest, ForceLeaderTest]
   [junit4] Completed [5/5 (3!)] on J0 in 208.23s, 3 tests, 1 error, 1 skipped <<< FAILURES!
   [junit4] 
   [junit4] 
   [junit4] Tests with failures [seed: 7A776A947A0461A0]:
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4] 
   [junit4] 
   [junit4] JVM J0:     1.01 ..  4573.00 =  4571.99s
   [junit4] JVM J1:     0.86 ..  3325.50 =  3324.64s
   [junit4] JVM J2:     0.86 ..  1879.17 =  1878.31s
   [junit4] Execution time total: 1 hour 16 minutes 13 seconds
   [junit4] Tests summary: 5 suites, 15 tests, 3 errors, 5 ignored

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 5 suites, 15 tests, 3 errors, 5 ignored [seed: 7A776A947A0461A0]

Total time: 76 minutes 16 seconds

[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.ForceLeaderTest
[repro] git checkout 345a655f216258c406c384ada9aa6d5f14e254f9
Previous HEAD position was 9aa15a2... SOLR-12973: Admin UI Nodes view support for replica* replica names. (Daniel Collins, Christine Poerschke, janhoy)
HEAD is now at 345a655... SOLR-12973: Admin UI Nodes view support for replica* replica names. (Daniel Collins, Christine Poerschke, janhoy)
[repro] Exiting with code 256
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)