Posted to dev@lucene.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2019/03/30 05:51:59 UTC

[JENKINS] Lucene-Solr-repro - Build # 3088 - Unstable

Build: https://builds.apache.org/job/Lucene-Solr-repro/3088/

[...truncated 29 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/11/consoleText

[repro] Revision: 03a3562f782a795afb3e5dbb474aca216273cc4d

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=StressHdfsTest -Dtests.method=test -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=es-HN -Dtests.timezone=America/Martinique -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest -Dtests.method=testCollectionsAPI -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-JO -Dtests.timezone=BET -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 07b37ff26becf214bae000dff08b9091d31327a8
[repro] git fetch
[repro] git checkout 03a3562f782a795afb3e5dbb474aca216273cc4d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       HdfsCollectionsAPIDistributedZkTest
[repro]       StressHdfsTest
[repro] ant compile-test

[...truncated 3576 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.HdfsCollectionsAPIDistributedZkTest|*.StressHdfsTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-JO -Dtests.timezone=BET -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 84647 lines...]
   [junit4]   2> 713920 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@705bd61a (delete_data_dir_shard3_replica_n6) has a reference count of -1
   [junit4]   2> 713920 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@6ee50c15 (delete_data_dir_shard3_replica_n8) has a reference count of -1
   [junit4]   2> 713920 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@2ad28ff4 (delete_data_dir_shard1_replica_n1) has a reference count of -1
   [junit4]   2> 713924 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 714290 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 714305 INFO  (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [    ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 5 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001 of type NRT
   [junit4]   2> 714326 WARN  (closeThreadPool-258-thread-2) [    ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 714326 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 714326 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 714326 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
   [junit4]   2> 714358 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 714358 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 714358 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 714358 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@316881f8{/_kxy/k,null,AVAILABLE}
   [junit4]   2> 714446 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 714483 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 714485 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-4-001/cores
   [junit4]   2> 714503 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.AbstractConnector Started ServerConnector@55a7342c{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:44932}
   [junit4]   2> 714503 INFO  (closeThreadPool-258-thread-2) [    ] o.e.j.s.Server Started @714615ms
   [junit4]   2> 714503 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty5, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=44932, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/cores}
   [junit4]   2> 714504 ERROR (closeThreadPool-258-thread-2) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 714504 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 714504 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.1.0
   [junit4]   2> 714504 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 714504 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 714504 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2019-03-30T05:42:58.342Z
   [junit4]   2> 714635 INFO  (zkConnectionManagerCallback-336-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 714706 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 714706 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/solr.xml
   [junit4]   2> 714833 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x108716157480018, likely client has closed socket
   [junit4]   2> 714848 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 714848 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 714869 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 716509 INFO  (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [    ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 6 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001 of type NRT
   [junit4]   2> 716510 WARN  (closeThreadPool-258-thread-1) [    ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 716510 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 716510 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 716510 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
   [junit4]   2> 716621 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 716621 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 716621 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 716621 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1bbeb5e5{/_kxy/k,null,AVAILABLE}
   [junit4]   2> 716922 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.AbstractConnector Started ServerConnector@1a36a32c{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:45307}
   [junit4]   2> 716922 INFO  (closeThreadPool-258-thread-1) [    ] o.e.j.s.Server Started @717034ms
   [junit4]   2> 716922 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty6, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=45307, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores}
   [junit4]   2> 716923 ERROR (closeThreadPool-258-thread-1) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 716923 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 716923 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.1.0
   [junit4]   2> 716923 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 716923 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 716923 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2019-03-30T05:43:00.761Z
   [junit4]   2> 717214 INFO  (zkConnectionManagerCallback-339-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 717299 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 717370 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/solr.xml
   [junit4]   2> 717374 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 717374 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 717375 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 718372 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
   [junit4]   2> 718485 WARN  (closeThreadPool-258-thread-1) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@6cabe3fd[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 718683 WARN  (closeThreadPool-258-thread-1) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@62779b07[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 718684 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
   [junit4]   2> 718791 INFO  (zkConnectionManagerCallback-346-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 718864 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
   [junit4]   2> 718900 INFO  (zkConnectionManagerCallback-348-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 718905 WARN  (closeThreadPool-258-thread-2) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@1c5ef5c[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 718992 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (5)
   [junit4]   2> 719028 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.ZkController Publish node=127.0.0.1:45307__kxy%2Fk as DOWN
   [junit4]   2> 719046 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
   [junit4]   2> 719046 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:45307__kxy%2Fk
   [junit4]   2> 719158 INFO  (zkCallback-244-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-315-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-275-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-306-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-290-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-268-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719158 INFO  (zkCallback-237-thread-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719159 INFO  (zkCallback-256-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719160 WARN  (closeThreadPool-258-thread-2) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@615709ed[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 719161 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
   [junit4]   2> 719168 INFO  (zkCallback-297-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719188 INFO  (zkCallback-325-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719194 INFO  (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [    ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 7 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001 of type NRT
   [junit4]   2> 719194 INFO  (zkCallback-332-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719230 WARN  (closeThreadPool-258-thread-3) [    ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 719230 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 719230 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 719230 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
   [junit4]   2> 719243 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 719243 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 719243 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 719243 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@4371324d{/_kxy/k,null,AVAILABLE}
   [junit4]   2> 719245 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
   [junit4]   2> 719267 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.AbstractConnector Started ServerConnector@5bbb8a0a{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:37427}
   [junit4]   2> 719267 INFO  (closeThreadPool-258-thread-3) [    ] o.e.j.s.Server Started @719379ms
   [junit4]   2> 719267 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty7, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=37427, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores}
   [junit4]   2> 719268 ERROR (closeThreadPool-258-thread-3) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 719268 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 719268 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.1.0
   [junit4]   2> 719268 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 719268 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 719268 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2019-03-30T05:43:03.106Z
   [junit4]   2> 719358 INFO  (zkConnectionManagerCallback-356-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 719427 INFO  (zkConnectionManagerCallback-358-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 719631 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 719631 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/solr.xml
   [junit4]   2> 719632 INFO  (zkConnectionManagerCallback-360-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 719632 INFO  (zkConnectionManagerCallback-365-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 719720 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (6)
   [junit4]   2> 719721 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
   [junit4]   2> 719735 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 719737 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (6)
   [junit4]   2> 719773 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.ZkController Publish node=127.0.0.1:44932__kxy%2Fk as DOWN
   [junit4]   2> 719775 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
   [junit4]   2> 719775 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:44932__kxy%2Fk
   [junit4]   2> 719795 INFO  (zkCallback-275-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719795 INFO  (zkCallback-237-thread-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719809 INFO  (zkCallback-290-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719809 INFO  (zkCallback-244-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719809 INFO  (zkCallback-268-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719809 INFO  (zkCallback-306-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719809 INFO  (zkCallback-315-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719836 INFO  (zkCallback-256-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719837 INFO  (zkCallback-364-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719837 INFO  (zkCallback-332-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719837 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719837 INFO  (zkCallback-359-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719841 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 719841 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 719842 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 719844 INFO  (zkCallback-297-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719844 INFO  (zkCallback-325-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
   [junit4]   2> 719899 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720117 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720117 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720119 INFO  (closeThreadPool-258-thread-1) [    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores
   [junit4]   2> 720476 INFO  (zkConnectionManagerCallback-374-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 720550 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (7)
   [junit4]   2> 720551 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
   [junit4]   2> 720552 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 720734 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720881 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720881 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 720915 INFO  (closeThreadPool-258-thread-2) [    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/cores
   [junit4]   2> 722344 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
   [junit4]   2> 722361 WARN  (closeThreadPool-258-thread-3) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@16d48b39[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 722384 WARN  (closeThreadPool-258-thread-3) [    ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@3071030b[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 722418 INFO  (closeThreadPool-258-thread-3) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
   [junit4]   2> 722508 INFO  (zkConnectionManagerCallback-381-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 722581 INFO  (zkConnectionManagerCallback-383-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 722617 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (7)
   [junit4]   2> 722637 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.ZkController Publish node=127.0.0.1:37427__kxy%2Fk as DOWN
   [junit4]   2> 722638 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
   [junit4]   2> 722638 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:37427__kxy%2Fk
   [junit4]   2> 722639 INFO  (zkCallback-325-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722676 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722676 INFO  (zkCallback-332-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722676 INFO  (zkCallback-256-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722676 INFO  (zkCallback-364-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722684 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-315-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-290-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-268-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-306-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-275-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722685 INFO  (zkCallback-244-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722688 INFO  (zkCallback-297-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722692 INFO  (zkCallback-237-thread-3) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722693 INFO  (zkCallback-359-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 722693 INFO  (zkCallback-373-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
   [junit4]   2> 723001 INFO  (zkConnectionManagerCallback-390-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 723090 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (8)
   [junit4]   2> 723091 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
   [junit4]   2> 723126 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 723332 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 723477 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 723477 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 723514 INFO  (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores
   [junit4]   2> 724796 INFO  (qtp171835094-18185) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:45307__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724796 INFO  (qtp171835094-18186) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:35682__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724797 INFO  (qtp171835094-18187) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:33860__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724797 INFO  (qtp171835094-18184) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:37427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724808 INFO  (qtp171835094-18181) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:44932__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724845 INFO  (qtp171835094-18377) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:42427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 724873 INFO  (qtp171835094-18378) [n:127.0.0.1:33860__kxy%2Fk    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:37605__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 725075 INFO  (OverseerThreadFactory-203-thread-3-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:37427__kxy%2Fk for creating new replica of shard shard1 for collection collection1
   [junit4]   2> 725077 INFO  (OverseerThreadFactory-203-thread-3-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
   [junit4]   2> 725359 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk    x:collection1_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 726503 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0
   [junit4]   2> 726633 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema [collection1_shard1_replica_n1] Schema name=test
   [junit4]   2> 727815 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 727961 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n1' using configuration from collection collection1, trusted=true
   [junit4]   2> 727961 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n1' (registry 'solr.core.collection1.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 727962 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home
   [junit4]   2> 727962 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 727962 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [[collection1_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores/collection1_shard1_replica_n1], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/]
   [junit4]   2> 727963 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
   [junit4]   2> 728015 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 728015 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 728015 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 728276 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 728301 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data
   [junit4]   2> 728389 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/index
   [junit4]   2> 728409 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 728409 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 728409 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 728483 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 728484 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@58f75db
   [junit4]   2> 729016 WARN  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 729434 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 729434 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 729434 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 729490 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 729490 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 729561 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278
   [junit4]   2> 729634 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@24545db3[collection1_shard1_replica_n1] main]
   [junit4]   2> 729649 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 729650 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 729650 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 729651 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408034542321664
   [junit4]   2> 729787 INFO  (searcherExecutor-276-thread-1-processing-n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1 c:collection1 s:shard1) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [collection1_shard1_replica_n1] Registered new searcher Searcher@24545db3[collection1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 729866 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0}, version=0}
   [junit4]   2> 729867 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/ has no replicas
   [junit4]   2> 730072 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/collection1/leaders/shard1/leader after winning as /collections/collection1/leader_elect/shard1/election/74434056741716002-core_node2-n_0000000000
   [junit4]   2> 730074 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/ shard1
   [junit4]   2> 730081 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730095 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730145 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 730146 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=4787
   [junit4]   2> 730218 INFO  (qtp171835094-18184) [n:127.0.0.1:33860__kxy%2Fk c:collection1   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:37427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=5421
   [junit4]   2> 730290 INFO  (OverseerCollectionConfigSetProcessor-74434056741715972-127.0.0.1:40557__kxy%2Fk-n_0000000000) [n:127.0.0.1:40557__kxy%2Fk    ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000007 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 730307 INFO  (OverseerThreadFactory-203-thread-4-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:45307__kxy%2Fk for creating new replica of shard shard1 for collection collection1
   [junit4]   2> 730308 INFO  (OverseerThreadFactory-203-thread-4-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
   [junit4]   2> 730337 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730443 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730443 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730448 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk    x:collection1_shard1_replica_n3] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 730642 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730642 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 730666 INFO  (zkCallback-382-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 731634 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0
   [junit4]   2> 731744 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema [collection1_shard1_replica_n3] Schema name=test
   [junit4]   2> 732650 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 732812 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n3' using configuration from collection collection1, trusted=true
   [junit4]   2> 732813 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n3' (registry 'solr.core.collection1.shard1.replica_n3') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 732813 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home
   [junit4]   2> 732813 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 732813 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [[collection1_shard1_replica_n3] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores/collection1_shard1_replica_n3], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/]
   [junit4]   2> 732884 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/snapshot_metadata
   [junit4]   2> 733050 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 733050 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 733050 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 733118 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 733154 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data
   [junit4]   2> 733458 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/index
   [junit4]   2> 733517 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 733517 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 733517 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 733587 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 733588 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@4c88fa37
   [junit4]   2> 734374 WARN  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 734792 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 734792 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 734792 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 734974 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 734974 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 734976 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278]
   [junit4]   2> 735068 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@c0e15fe[collection1_shard1_replica_n3] main]
   [junit4]   2> 735082 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 735083 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 735083 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 735083 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408040238186496
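
The "new clock" above is a SolrCloud version stamp: to a first approximation it is the epoch time in milliseconds shifted left 20 bits, with the low 20 bits reserved for a per-millisecond counter. Decoding the logged value recovers this run's wall-clock time; a quick check, assuming that layout:

    public class VersionClock {
        public static void main(String[] args) {
            long version = 1629408040238186496L;   // value from the log line above
            long epochMillis = version >>> 20;     // drop the 20 low counter bits
            System.out.println(java.time.Instant.ofEpochMilli(epochMillis));
            // prints 2019-03-30T05:43:18.921Z, squarely inside this build's run
        }
    }
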
   [junit4]   2> 735100 INFO  (searcherExecutor-281-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@c0e15fe[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 735137 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0, core_node4=0}, version=1}
   [junit4]   2> 735137 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
   [junit4]   2> 735195 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n3
   [junit4]   2> 735224 INFO  (updateExecutor-342-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 735249 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
   [junit4]   2> 735250 INFO  (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n3] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=4801
   [junit4]   2> 735250 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy startupVersions is empty
   [junit4]   2> 735280 INFO  (qtp1708575716-18319) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_kxy/k path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 QTime=13
   [junit4]   2> 735280 INFO  (qtp1708575716-18319) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_kxy/k path=/admin/ping params={wt=javabin&version=2} status=0 QTime=13
   [junit4]   2> 735281 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n3]
   [junit4]   2> 735354 INFO  (qtp171835094-18185) [n:127.0.0.1:33860__kxy%2Fk c:collection1   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:45307__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=10558
   [junit4]   2> 735357 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Starting to buffer updates. HDFSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 735357 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n3] as recovering, leader is [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/] and I am [http://127.0.0.1:45307/_kxy/k/collection1_shard1_replica_n3/]
   [junit4]   2> 735558 INFO  (zkCallback-382-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735558 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735558 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735571 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735572 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:37427/_kxy/k]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:45307__kxy%252Fk&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 735577 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735577 INFO  (zkCallback-347-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 735593 INFO  (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 735595 INFO  (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:45307__kxy%2Fk, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"dataDir":"hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/","base_url":"http://127.0.0.1:45307/_kxy/k","node_name":"127.0.0.1:45307__kxy%2Fk","type":"NRT","ulogDir":"hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/tlog","core":"collection1_shard1_replica_n3","shared_storage":"true","state":"recovering"}
   [junit4]   2> 735595 INFO  (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk    x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:45307__kxy%252Fk&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 736098 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/] - recoveringAfterStartup=[true]
   [junit4]   2> 736098 WARN  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.PeerSyncWithLeader no frame of reference to tell if we've missed updates
   [junit4]   2> 736098 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy PeerSync Recovery was not successful - trying replication.
   [junit4]   2> 736098 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting Replication Recovery.
   [junit4]   2> 736098 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to replicate from [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/].
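
The RecoveryStrategy lines above show the standard two-step recovery: attempt PeerSync against the leader first, and when there is no frame of reference (a freshly created core has no recent updates to compare), fall back to full index replication. A compressed, illustrative sketch of that ordering, not the actual RecoveryStrategy source:

    public class RecoverySketch {
        // Hypothetical stand-ins; only the PeerSync-then-replicate fallback
        // is taken from the log.
        boolean tryPeerSync(String leaderUrl) { return false; }       // "no frame of reference"
        boolean replicateIndexFrom(String leaderUrl) { return true; } // full fetch via /replication

        boolean doRecovery(String leaderUrl) {
            if (tryPeerSync(leaderUrl)) {
                return true; // caught up from the leader's update log
            }
            // "PeerSync Recovery was not successful - trying replication."
            return replicateIndexFrom(leaderUrl);
        }
    }
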
   [junit4]   2> 736118 INFO  (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1629408041323462656,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 736152 INFO  (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 736230 INFO  (qtp760765234-18284) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false
   [junit4]   2> 736230 INFO  (qtp760765234-18284) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3]  webapp=/_kxy/k path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0
   [junit4]   2> 736285 INFO  (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 736286 INFO  (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  webapp=/_kxy/k path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 187
   [junit4]   2> 736341 INFO  (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_kxy/k path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=53
   [junit4]   2> 736342 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 736342 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 736342 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 736342 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 736342 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher New index in Master. Deleting mine...
   [junit4]   2> 736369 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@2e148c2c[collection1_shard1_replica_n3] main]
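
Note that the IndexFetcher exchange above never copies a file: the leader reports generation 1 at version 0, i.e. a brand-new index with no commits, so the follower discards its own (equally empty) index and opens a fresh searcher. A condensed, illustrative rendering of that branch, with hypothetical helper names standing in for IndexFetcher internals:

    public class NewIndexBranch {
        static void deleteLocalIndexFiles() { /* "New index in Master. Deleting mine..." */ }
        static void openNewSearcher()       { /* the Searcher@2e148c2c line above */ }

        static boolean fetch(long masterVersion, boolean forceReplication) {
            if (masterVersion == 0L && forceReplication) {
                deleteLocalIndexFiles();
                openNewSearcher();
                return true; // nothing to download from the leader
            }
            return false; // real code would diff and fetch segment files here
        }
    }
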
   [junit4]   2> 736410 INFO  (OverseerCollectionConfigSetProcessor-74434056741715972-127.0.0.1:40557__kxy%2Fk-n_0000000000) [n:127.0.0.1:40557__kxy%2Fk    ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 736412 INFO  (searcherExecutor-281-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@2e148c2c[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 736412 INFO  (OverseerThreadFactory-203-thread-5-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:35682__kxy%2Fk for creating new replica of shard shard1 for collection collection1
   [junit4]   2> 736413 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy No replay needed.
   [junit4]   2> 736413 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Replication Recovery was successful.
   [junit4]   2> 736413 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Registering as Active after recovery.
   [junit4]   2> 736431 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery.
   [junit4]   2> 736431 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408041651666944
   [junit4]   2> 736432 INFO  (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true]
   [junit4]   2> 736447 INFO  (OverseerThreadFactory-203-thread-5-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
   [junit4]   2> 736539 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk    x:collection1_shard1_replica_n5] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n5&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 736556 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736556 INFO  (zkCallback-382-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736556 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736560 INFO  (zkCallback-347-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736560 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736560 INFO  (zkCallback-347-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736771 INFO  (zkCallback-347-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736771 INFO  (zkCallback-347-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736791 INFO  (zkCallback-347-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736791 INFO  (zkCallback-382-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736791 INFO  (zkCallback-382-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
   [junit4]   2> 736791 INFO  (zkCallback-382-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
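
The repeated zkCallback lines are Solr's watchers on /collections/collection1/state.json firing once per replica state change on every watching node. ZooKeeper watches are one-shot, so each notification re-reads the node and re-registers the watch, which is why the same message appears in bursts. A minimal standalone sketch of that pattern using the plain ZooKeeper client, not Solr's ZkStateReader:

    import org.apache.zookeeper.*;

    public class StateWatcher implements Watcher {
        private final ZooKeeper zk;
        private final String path = "/collections/collection1/state.json";

        StateWatcher(ZooKeeper zk) { this.zk = zk; }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeDataChanged) {
                try {
                    // Re-read and re-arm: passing `this` registers the watch again.
                    byte[] data = zk.getData(path, this, null);
                    System.out.println("state.json updated: " + data.length + " bytes");
                } catch (KeeperException | InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
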
   [junit4]   2> 737954 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0
   [junit4]   2> 738068 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema [collection1_shard1_replica_n5] Schema name=test
   [junit4]   2> 739791 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 739954 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n5' using configuration from collection collection1, trusted=true
   [junit4]   2> 739955 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n5' (registry 'solr.core.collection1.shard1.replica_n5') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
   [junit4]   2> 739971 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home
   [junit4]   2> 739971 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 739971 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [[collection1_shard1_replica_n5] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-1-001/cores/collection1_shard1_replica_n5], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/]
   [junit4]   2> 739972 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/snapshot_metadata
   [junit4]   2> 740026 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 740027 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 740027 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 740158 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 740159 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data
   [junit4]   2> 740316 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/index
   [junit4]   2> 740354 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 740354 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 740354 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 740462 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 740463 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@20bc513c
   [junit4]   2> 741025 WARN  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 741301 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 741301 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 741301 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 741351 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 741351 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 741365 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278]
   [junit4]   2> 741562 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.s.SolrIndexSearcher Opening [Searcher@385e2046[collection1_shard1_replica_n5] main]
   [junit4]   2> 741600 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 741600 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 741601 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 741601 INFO  (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408047072804864
   [junit4]   2> 741683 INFO  (searcherExecutor-286-thread-1-processing-n:127.0.0.1:35682__kxy%2Fk x:collection1_shard1_replica_n5 c:collection1 s:shard1) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1  x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [collection1_shard1_replica_n5] Registered new searcher Searcher@385e2046[colle

[...truncated too long message...]

pache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:748)
   [junit4]    > Caused by: java.net.SocketTimeoutException: Read timed out
   [junit4]    > 	at java.net.SocketInputStream.socketRead0(Native Method)
   [junit4]    > 	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
   [junit4]    > 	at java.net.SocketInputStream.read(SocketInputStream.java:171)
   [junit4]    > 	at java.net.SocketInputStream.read(SocketInputStream.java:141)
   [junit4]    > 	at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
   [junit4]    > 	at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
   [junit4]    > 	at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
   [junit4]    > 	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
   [junit4]    > 	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
   [junit4]    > 	at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
   [junit4]    > 	at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
   [junit4]    > 	at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
   [junit4]    > 	at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
   [junit4]    > 	at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
   [junit4]    > 	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
   [junit4]    > 	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
   [junit4]    > 	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
   [junit4]    > 	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
   [junit4]    > 	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
   [junit4]    > 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
   [junit4]    > 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
   [junit4]    > 	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:548)
   [junit4]    > 	... 53 more
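
The root cause above is a plain read timeout surfacing through SolrJ's HttpSolrClient while the test waits on a slow HDFS-backed node. For reference, the relevant knobs live on the client builder (SolrJ 8.x API; the URL and values here are illustrative, not taken from the test):

    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class ClientTimeouts {
        public static void main(String[] args) throws Exception {
            HttpSolrClient client = new HttpSolrClient.Builder("http://127.0.0.1:8983/solr")
                .withConnectionTimeout(15_000) // ms to establish the TCP connection
                .withSocketTimeout(120_000)    // ms of read inactivity before SocketTimeoutException
                .build();
            client.close();
        }
    }
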
   [junit4]   2> 1012086 WARN  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called
   [junit4]   2> 1012091 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.w.WebAppContext@261d707e{datanode,/,null,UNAVAILABLE}{/datanode}
   [junit4]   2> 1012091 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@64bb06bb{HTTP/1.1,[http/1.1]}{localhost:0}
   [junit4]   2> 1012091 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 1012092 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@57fc3dca{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE}
   [junit4]   2> 1012146 WARN  (BP-1411784138-127.0.0.1-1553924534507 heartbeating to lucene2-us-west.apache.org/127.0.0.1:37806) [    ] o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager interrupted
   [junit4]   2> 1012146 WARN  (BP-1411784138-127.0.0.1-1553924534507 heartbeating to lucene2-us-west.apache.org/127.0.0.1:37806) [    ] o.a.h.h.s.d.DataNode Ending block pool service for: Block pool BP-1411784138-127.0.0.1-1553924534507 (Datanode Uuid aea1f837-cb06-474c-b550-ecd13bc42b4b) service to lucene2-us-west.apache.org/127.0.0.1:37806
   [junit4]   2> 1012165 WARN  (refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/tempDir-001/hdfsBaseDir/data/data1/current/BP-1411784138-127.0.0.1-1553924534507) [    ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk information: sleep interrupted
   [junit4]   2> 1012205 WARN  (refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/tempDir-001/hdfsBaseDir/data/data2/current/BP-1411784138-127.0.0.1-1553924534507) [    ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk information: sleep interrupted
   [junit4]   2> 1012244 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.w.WebAppContext@2bfecc87{hdfs,/,null,UNAVAILABLE}{/hdfs}
   [junit4]   2> 1012245 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@4d3f14a{HTTP/1.1,[http/1.1]}{lucene2-us-west.apache.org:0}
   [junit4]   2> 1012245 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 1012245 INFO  (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@16e65e64{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE}
   [junit4]   2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002
   [junit4]   2> Mar 30, 2019 5:47:57 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 16 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {multiDefault=PostingsFormat(name=LuceneFixedGap), _root_=PostingsFormat(name=Asserting), id=Lucene50(blocksize=128), text=PostingsFormat(name=LuceneVarGapDocFreqInterval), txt_t=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{range_facet_l_dv=DocValuesFormat(name=Asserting), _version_=DocValuesFormat(name=Lucene80), intDefault=DocValuesFormat(name=Lucene80), id_i1=DocValuesFormat(name=Lucene80), range_facet_i_dv=DocValuesFormat(name=Lucene80), intDvoDefault=DocValuesFormat(name=Direct), range_facet_l=DocValuesFormat(name=Lucene80), timestamp=DocValuesFormat(name=Lucene80)}, maxPointsInLeafNode=684, maxMBSortInHeap=5.830008782199224, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@11ed1428), locale=ar-JO, timezone=BET
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=4,free=152333400,total=502267904
   [junit4]   2> NOTE: All tests run in this JVM: [StressHdfsTest, StressHdfsTest]
   [junit4] Completed [9/10 (6!)] on J0 in 346.01s, 1 test, 1 error <<< FAILURES!
   [junit4] 
   [junit4] HEARTBEAT J1 PID(10731@lucene2-us-west.apache.org): 2019-03-30T05:48:40, stalled for 69.8s at: StressHdfsTest.test
   [junit4] HEARTBEAT J1 PID(10731@lucene2-us-west.apache.org): 2019-03-30T05:49:40, stalled for  130s at: StressHdfsTest.test
   [junit4] HEARTBEAT J1 PID(10731@lucene2-us-west.apache.org): 2019-03-30T05:50:40, stalled for  190s at: StressHdfsTest.test
   [junit4] HEARTBEAT J1 PID(10731@lucene2-us-west.apache.org): 2019-03-30T05:51:40, stalled for  250s at: StressHdfsTest.test
   [junit4] Suite: org.apache.solr.cloud.hdfs.StressHdfsTest
   [junit4] OK       256s J1 | StressHdfsTest.test
   [junit4] Completed [10/10 (6!)] on J1 in 263.71s, 1 test
   [junit4] 
   [junit4] 
   [junit4] Tests with failures [seed: C6AA855DD07506D9]:
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI
   [junit4]   - org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4] 
   [junit4] 
   [junit4] JVM J0:     1.65 ..  1019.30 =  1017.66s
   [junit4] JVM J1:     1.71 ..  1247.12 =  1245.41s
   [junit4] JVM J2:     1.75 ..   998.41 =   996.66s
   [junit4] Execution time total: 20 minutes 47 seconds
   [junit4] Tests summary: 10 suites, 100 tests, 5 errors, 1 failure

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1573: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1100: There were test failures: 10 suites, 100 tests, 5 errors, 1 failure [seed: C6AA855DD07506D9]

Total time: 20 minutes 50 seconds

[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest
[repro]   4/5 failed: org.apache.solr.cloud.hdfs.StressHdfsTest
[repro] git checkout 07b37ff26becf214bae000dff08b9091d31327a8
Previous HEAD position was 03a3562... SOLR-13349: High CPU usage in Solr due to Java 8 bug
HEAD is now at 07b37ff... SOLR-13355: Obey 'ALL' for handlers with other predefined perms
[repro] Exiting with code 256
Archiving artifacts
[Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-BadApples-Tests-8.x#58 to compare, so performing full copy of artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)

[JENKINS] Lucene-Solr-repro - Build # 3089 - Still Unstable

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Build: https://builds.apache.org/job/Lucene-Solr-repro/3089/

[...truncated 29 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/56/consoleText

[repro] Revision: b2941ff0da4987dec4774aa6b4bb7c597b362243

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=LeaderVoteWaitTimeoutTest -Dtests.method=basicTest -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ja-JP -Dtests.timezone=Asia/Dhaka -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ChaosMonkeySafeLeaderWithPullReplicasTest -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SD -Dtests.timezone=Africa/Brazzaville -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestDistributedStatsComponentCardinality -Dtests.method=test -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=es-DO -Dtests.timezone=Asia/Chongqing -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestDocTermOrdsUninvertLimit -Dtests.method=testTriggerUnInvertLimit -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=de-GR -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
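
Each repro line pins -Dtests.seed so the randomizedtesting harness replays the identical locale, timezone, codec, and data choices that produced the failure. The same pin can be applied in code while debugging; a hypothetical test class showing the annotation:

    import com.carrotsearch.randomizedtesting.RandomizedRunner;
    import com.carrotsearch.randomizedtesting.annotations.Seed;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(RandomizedRunner.class)
    @Seed("10CFE156AFBFCDCA") // master seed copied from the repro lines above
    public class PinnedSeedTest {
        @Test
        public void replaysDeterministically() {
            // all randomness drawn through the runner derives from the seed
        }
    }
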

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 07b37ff26becf214bae000dff08b9091d31327a8
[repro] git fetch
[repro] git checkout b2941ff0da4987dec4774aa6b4bb7c597b362243

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestDocTermOrdsUninvertLimit
[repro]       LeaderVoteWaitTimeoutTest
[repro]       TestDistributedStatsComponentCardinality
[repro]       ChaosMonkeySafeLeaderWithPullReplicasTest
[repro] ant compile-test

[...truncated 3564 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 -Dtests.class="*.TestDocTermOrdsUninvertLimit|*.LeaderVoteWaitTimeoutTest|*.TestDistributedStatsComponentCardinality|*.ChaosMonkeySafeLeaderWithPullReplicasTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=de-GR -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 593 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.LeaderVoteWaitTimeoutTest
[repro]   4/5 failed: org.apache.solr.handler.component.TestDistributedStatsComponentCardinality
[repro]   5/5 failed: org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest
[repro]   5/5 failed: org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit
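
The N/5 ratios above follow from -Dtests.dups=5 in the command earlier: each suite is cloned five times and the tally counts how many clones of each class failed. Spelled out as trivial arithmetic (numbers illustrative, taken from the list):

    public class DupTally {
        public static void main(String[] args) {
            int dups = 5;    // from -Dtests.dups=5
            int failed = 4;  // e.g. LeaderVoteWaitTimeoutTest clones that failed
            System.out.println(failed + "/" + dups + " failed");
        }
    }
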

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 196 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestDocTermOrdsUninvertLimit
[repro]       ChaosMonkeySafeLeaderWithPullReplicasTest
[repro] ant compile-test

[...truncated 3564 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.TestDocTermOrdsUninvertLimit|*.ChaosMonkeySafeLeaderWithPullReplicasTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=10CFE156AFBFCDCA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=de-GR -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 41173 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: /usr/local/asfpackages/java/jdk1.8.0_191/jre/bin/java -ea -esa -Dtests.prefix=tests -Dtests.seed=10CFE156AFBFCDCA -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=de-GR -Dtests.timezone=Australia/Tasmania -Dtests.directory=random -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/logging.properties -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp -Djava.io.tmpdir=./temp -Dcommon.dir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene -Dclover.db.dir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/clover/db -Djava.security.policy=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/solr-tests.policy -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0 -Djunit4.tempDir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.maxfailures=10 -Dtests.badapples=true -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dfile.encoding=US-ASCII -classpath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/classes/test:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-test-framework/classes/java:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/test-framework/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/test-framework/lib/junit4-ant-2.7.2.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/src/test-files:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/test-framework/classes/java:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/codecs/classes/java:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-solrj/classes/java:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/classes/java:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/analysis/common/lucene-analyzers-common-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/analysis/nori/lucene-analyzers-nori-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/codecs/lucene-codecs-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/backward-codecs/lucene-backward-codecs-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/highlighter/lucene-highlighter-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/memory/lucene-memory-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/misc/lucene-misc-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/spatial-extras/lucene-spatial-extras-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/spatial3d/lucene-spatial3d-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/expressions/lucene-expressions-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/suggest/lucene-suggest-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/grouping/lucene-grouping-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/queries/lucene-queries-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/queryparser/lucene-queryparser-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/join/lucene-join-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/sandbox/lucene-sandbox-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/classification/lucene-classification-9.0.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/lib/asm-5.1.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/lib/asm-commons-5.1.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/lib/avatica-core-1.13.0.jar:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core/lib/caffeine-2.4.0.jar:/home/jenkins/jenkins-s
[...classpath truncated: jars under solr/core/lib, solr/solrj/lib, solr/server/lib, the lucene/solr build and test-lib outputs, and the ant/ivy runtime...] com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp/junit4-J0-20190330_084053_1677136841270423121329.events @/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp/junit4-J0-20190330_084053_1675977973006953327552.suites -stdin
   [junit4] ERROR: JVM J0 ended with an exception: Quit event not received from the forked process? This may indicate JVM crash or runner bugs.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1563)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:123)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:997)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:994)
   [junit4] 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4] 	at java.lang.Thread.run(Thread.java:748)
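
A missing quit event means the forked test JVM (J0 above) died before it could hand its results back to the junit4 runner. A minimal triage sketch (assumes the JDK's default hs_err_pid*.log fatal-error-log naming; the paths are taken from the forked command line above):

   # a HotSpot crash leaves a fatal-error log in the fork's working directory
   find /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro -name 'hs_err_pid*.log'
   # the J0 events file may still contain the last output the fork produced
   tail -n 50 /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp/junit4-J0-*.events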

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1573: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1100: At least one slave process threw an exception, first: Quit event not received from the forked process? This may indicate JVM crash or runner bugs.

Total time: 7 minutes 31 seconds

[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   4/5 failed: org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest
[repro]   4/5 failed: org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit
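
To chase one of these locally, the build's ant entry point can target a single suite (a sketch: -Dtestcase and -Dtests.dups are existing Lucene/Solr test options, but a deterministic rerun would also need the -Dtests.seed value from the original failure's repro line):

   # from solr/core: run the flaky suite five times in a row
   ant test -Dtestcase=TestDocTermOrdsUninvertLimit -Dtests.dups=5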
[repro] git checkout 07b37ff26becf214bae000dff08b9091d31327a8
Note: checking out '07b37ff26becf214bae000dff08b9091d31327a8'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 07b37ff... SOLR-13355: Obey 'ALL' for handlers with other predefined perms
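
Applying git's hint above to keep working from this revision (the branch name here is only an illustration):

   git checkout -b repro-at-07b37ff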
[repro] Exiting with code 256
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)