Posted to dev@lucene.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2018/12/28 12:41:50 UTC

[JENKINS] Lucene-Solr-repro - Build # 2580 - Still Unstable

Build: https://builds.apache.org/job/Lucene-Solr-repro/2580/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/415/consoleText

[repro] Revision: c236ad8a18a6d48a5d1be321fe22865052929064

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-QA -Dtests.timezone=Etc/GMT-1 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.method=testSplitWithChaosMonkey -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=de-LU -Dtests.timezone=America/Miquelon -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=de-LU -Dtests.timezone=America/Miquelon -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=FullSolrCloudDistribCmdsTest -Dtests.method=test -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SA -Dtests.timezone=Africa/Conakry -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=RestartWhileUpdatingTest -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=no -Dtests.timezone=America/Guatemala -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: c9c3ef0ef9251ee07fc65c1e06b5e887485f1f31
[repro] git fetch
[repro] git checkout c236ad8a18a6d48a5d1be321fe22865052929064

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       HdfsRestartWhileUpdatingTest
[repro]       RestartWhileUpdatingTest
[repro]       FullSolrCloudDistribCmdsTest
[repro]       ShardSplitTest
[repro] ant compile-test

[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 -Dtests.class="*.HdfsRestartWhileUpdatingTest|*.RestartWhileUpdatingTest|*.FullSolrCloudDistribCmdsTest|*.ShardSplitTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=889C124021662A08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-QA -Dtests.timezone=Etc/GMT-1 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 95274 lines...]
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@555e9e1 (control_collection_shard1_replica_n1) has a reference count of -1
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@6e34e636 (collection1_shard1_replica_n1) has a reference count of -1
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@32d08f19 (collection1_shard1_replica_n5) has a reference count of -1
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@3c298823 (collection1_shard1_replica_n3) has a reference count of -1
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@5116e08e (collection1_shard1_replica_n5) has a reference count of -1
   [junit4]   2> 1668713 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@47f62185 (collection1_shard1_replica_n3) has a reference count of -1
   [junit4]   2> 1668714 ERROR (Finalizer) [    ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@13515dc4 (collection1_shard1_replica_n1) has a reference count of -1
   [junit4]   2> 1669376 INFO  (recoveryExecutor-448-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@692351e8[collection1_shard1_replica_n3] main]
   [junit4]   2> 1669376 INFO  (recoveryExecutor-448-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1669378 INFO  (searcherExecutor-446-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@692351e8[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C3:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000038557}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_4(7.7.0):C34/12:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1546000100182}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=1)))}
   [junit4]   2> 1669378 INFO  (recoveryExecutor-448-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] {add=[2-0 (1621098535143342080), 1-1 (1621098536662728704), 0-1 (1621098536660631552), 1-2 (1621098538340450304), 0-2 (1621098538433773568), 1-3 (1621098538607837184), 1-4 (1621098538612031488), 0-3 (1621098538831183872), 0-4 (1621098538837475328), 1-5 (1621098538874175488), ... (34 adds)],delete=[1-2 (-1621098538649780224), 0-2 (-1621098538953867264), 1-5 (-1621098539088084992), 0-5 (-1621098539183505408), 1-7 (-1621098539230691328), 1-8 (-1621098539661656064), 0-7 (-1621098539676336128), 0-8 (-1621098541181042688), 3-2 (-1621098541971668992), 1-10 (-1621098542923776000), ... (12 deletes)]} 0 5580
   [junit4]   2> 1669379 WARN  (recoveryExecutor-448-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Log replay finished. recoveryInfo=RecoveryInfo{adds=34 deletes=12 deleteByQuery=0 errors=0 positionOfStart=0}
   [junit4]   2> 1669379 INFO  (coreZkRegister-437-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n3
   [junit4]   2> 1669399 INFO  (updateExecutor-484-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 1669420 INFO  (recoveryExecutor-420-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@2cc6d4b4[collection1_shard1_replica_n1] main]
   [junit4]   2> 1669422 INFO  (searcherExecutor-418-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [collection1_shard1_replica_n1] Registered new searcher Searcher@2cc6d4b4[collection1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C3:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000037749}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_i(7.7.0):C455/103:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1546000099780}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=2) Uninverting(_j(7.7.0):C196/46:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1546000101439}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=1)))}
   [junit4]   2> 1669506 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
   [junit4]   2> 1669506 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy startupVersions size=49 range=[1621098543241494528 to 1621098534143000576]
   [junit4]   2> 1669506 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Looks like a previous replication recovery did not complete - skipping peer sync.
   [junit4]   2> 1669512 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/admin/ping params={wt=javabin&version=2} hits=505 status=0 QTime=0
   [junit4]   2> 1669512 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/admin/ping params={wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 1669512 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n3]
   [junit4]   2> 1669512 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 1669512 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n3] as recovering, leader is [https://127.0.0.1:43888/collection1_shard1_replica_n1/] and I am [https://127.0.0.1:46358/collection1_shard1_replica_n3/]
   [junit4]   2> 1669559 INFO  (recoveryExecutor-420-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1669559 INFO  (recoveryExecutor-420-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] {add=[3-0 (1621098534879100929), 2-0 (1621098535143342080), 1-1 (1621098536662728704), 0-1 (1621098536660631552), 1-2 (1621098538340450304), 0-2 (1621098538433773568), 1-3 (1621098538607837184), 1-4 (1621098538612031488), 0-3 (1621098538831183872), 0-4 (1621098538837475328), ... (344 adds)],delete=[1-2 (-1621098538649780224), 0-2 (-1621098538953867264), 1-5 (-1621098539088084992), 0-5 (-1621098539183505408), 1-7 (-1621098539230691328), 1-8 (-1621098539661656064), 0-7 (-1621098539676336128), 0-8 (-1621098541181042688), 3-2 (-1621098541971668992), 1-10 (-1621098542923776000), ... (149 deletes)]} 0 6852
   [junit4]   2> 1669559 WARN  (recoveryExecutor-420-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Log replay finished. recoveryInfo=RecoveryInfo{adds=344 deletes=149 deleteByQuery=0 errors=0 positionOfStart=0}
   [junit4]   2> 1669559 INFO  (coreZkRegister-409-thread-1-processing-n:127.0.0.1:43888_ x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1669659 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node6=7, core_node6_recovering=7, core_node2=8, core_node4_recovering=8, core_node4=8}, version=19}
   [junit4]   2> 1669661 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:43888]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:46358_&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 1669823 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 1669862 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:46358_, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"core":"collection1_shard1_replica_n3","base_url":"https://127.0.0.1:46358","node_name":"127.0.0.1:46358_","state":"recovering","type":"NRT"}
   [junit4]   2> 1669862 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:46358_&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=38
   [junit4]   2> 1670392 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting Replication Recovery.
   [junit4]   2> 1670392 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to replicate from [https://127.0.0.1:43888/collection1_shard1_replica_n1/].
   [junit4]   2> 1670394 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098607058878464,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1670394 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1670394 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1670548 INFO  (qtp1613599179-1988) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false
   [junit4]   2> 1670548 INFO  (qtp1613599179-1988) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3]  webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=https://127.0.0.1:43888/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0
   [junit4]   2> 1670549 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  webapp= path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 155
   [junit4]   2> 1670583 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0
   [junit4]   2> 1670584 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's generation: 3
   [junit4]   2> 1670584 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's version: 1546000101387
   [junit4]   2> 1670584 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's generation: 3
   [junit4]   2> 1670584 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's version: 1546000100180
   [junit4]   2> 1670584 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Starting replication process
   [junit4]   2> 1671040 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&tlogFiles=false&wt=javabin&version=2&command=filelist} status=0 QTime=456
   [junit4]   2> 1671041 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Number of files in latest index in master: 45
   [junit4]   2> 1671042 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Starting download (fullCopy=true) to MMapDirectory@/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_889C124021662A08-001/shard-3-001/cores/collection1_shard1_replica_n3/data/index.20181228132826561 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@7ff17f9a
   [junit4]   2> 1671042 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.si did not match. expected checksum is 2965971810 and actual is checksum 2442538431. expected length is 527 and actual length is 527
   [junit4]   2> 1671081 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671117 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene50_0.doc did not match. expected checksum is 2104197059 and actual is checksum 656707240. expected length is 122 and actual length is 122
   [junit4]   2> 1671166 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene50_0.doc&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671296 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene50_0.tim did not match. expected checksum is 790190322 and actual is checksum 2073534641. expected length is 259 and actual length is 259
   [junit4]   2> 1671353 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene50_0.tim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671369 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene50_0.pos did not match. expected checksum is 2357043016 and actual is checksum 695596068. expected length is 86 and actual length is 86
   [junit4]   2> 1671390 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene50_0.pos&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671405 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.nvd did not match. expected checksum is 1653065007 and actual is checksum 2310045228. expected length is 59 and actual length is 59
   [junit4]   2> 1671426 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.nvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671442 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.fdx did not match. expected checksum is 3171413489 and actual is checksum 3420640460. expected length is 84 and actual length is 84
   [junit4]   2> 1671462 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.fdx&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671478 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene50_0.tip did not match. expected checksum is 557496496 and actual is checksum 2373194718. expected length is 145 and actual length is 145
   [junit4]   2> 1671499 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene50_0.tip&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671515 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.fdt did not match. expected checksum is 701565970 and actual is checksum 2241182346. expected length is 149 and actual length is 149
   [junit4]   2> 1671535 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.fdt&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671551 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene70_0.dvm did not match. expected checksum is 3144127171 and actual is checksum 963571579. expected length is 171 and actual length is 171
   [junit4]   2> 1671572 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene70_0.dvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671588 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.dii did not match. expected checksum is 2674277892 and actual is checksum 581718730. expected length is 69 and actual length is 69
   [junit4]   2> 1671609 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.dii&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671624 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0_Lucene70_0.dvd did not match. expected checksum is 3004244888 and actual is checksum 3943664304. expected length is 77 and actual length is 77
   [junit4]   2> 1671645 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0_Lucene70_0.dvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671661 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.dim did not match. expected checksum is 1061246725 and actual is checksum 3128116721. expected length is 107 and actual length is 107
   [junit4]   2> 1671681 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.dim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671698 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.nvm did not match. expected checksum is 2774003920 and actual is checksum 517795199. expected length is 100 and actual length is 100
   [junit4]   2> 1671718 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.nvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671733 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File _0.fnm did not match. expected checksum is 2019686880 and actual is checksum 213862034. expected length is 446 and actual length is 446
   [junit4]   2> 1671754 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_0.fnm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671771 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.nvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671791 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene70_0.dvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671807 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fdt&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671844 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.pos&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671864 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fdx&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671880 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene70_0.dvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671900 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.doc&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671916 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.tim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671936 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.tip&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671953 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1671989 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_2.liv&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672009 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.dii&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672026 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.nvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672048 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fnm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672083 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.dim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672099 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_1.liv&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672115 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.dii&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672135 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.nvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672155 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fnm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672172 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.dim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672208 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene70_0.dvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672226 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.nvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672299 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672333 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene70_0.dvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672371 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.pos&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672406 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fdt&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672438 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.tip&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672475 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.doc&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672523 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.tim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=12
   [junit4]   2> 1672545 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fdx&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672580 WARN  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher File segments_3 did not match. expected checksum is 2023026866 and actual is checksum 3157973651. expected length is 315 and actual length is 250
   [junit4]   2> 1672600 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=segments_3&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1672637 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Bytes downloaded: 22585, Bytes skipped downloading: 0
   [junit4]   2> 1672637 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Total time taken for download (fullCopy=true,bytesDownloaded=22585) : 2 secs (11292 bytes/sec) to MMapDirectory@/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_889C124021662A08-001/shard-3-001/cores/collection1_shard1_replica_n3/data/index.20181228132826561 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@7ff17f9a
   [junit4]   2> 1672655 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore Updating index properties... index=index.20181228132826561
   [junit4]   2> 1672672 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher removing old index directory MMapDirectory@/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_889C124021662A08-001/shard-3-001/cores/collection1_shard1_replica_n3/data/index lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@2e766d63
   [junit4]   2> 1672706 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=28, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
   [junit4]   2> 1672708 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState New IndexWriter is ready to be used.
   [junit4]   2> 1672779 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@1b6b5e67[collection1_shard1_replica_n3] main]
   [junit4]   2> 1672782 INFO  (searcherExecutor-446-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@1b6b5e67[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C3:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000037749}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_i(7.7.0):C455/103:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000099780}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=2) Uninverting(_j(7.7.0):C196/46:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000101439}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=1)))}
   [junit4]   2> 1672818 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy No replay needed.
   [junit4]   2> 1672851 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Replication Recovery was successful.
   [junit4]   2> 1672851 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Registering as Active after recovery.
   [junit4]   2> 1672854 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node6=7, core_node6_recovering=7, core_node2=8, core_node4=8}, version=20}
   [junit4]   2> 1672854 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery.
   [junit4]   2> 1672874 INFO  (recoveryExecutor-485-thread-1-processing-n:127.0.0.1:46358_ x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true]
   [junit4]   2> 1673764 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
   [junit4]   2> 1673765 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Found new versions added after startup: num=[7]
   [junit4]   2> 1673765 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy currentVersions size=7 range=[1621098602065559552 to 1621098601762521088]
   [junit4]   2> 1673765 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy startupVersions is empty
   [junit4]   2> 1673765 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Looks like a previous replication recovery did not complete - skipping peer sync.
   [junit4]   2> 1673780 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/admin/ping params={wt=javabin&version=2} hits=505 status=0 QTime=0
   [junit4]   2> 1673780 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/admin/ping params={wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 1673780 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n5]
   [junit4]   2> 1673780 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=BUFFERING, tlog=null}
   [junit4]   2> 1673780 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n5] as recovering, leader is [https://127.0.0.1:43888/collection1_shard1_replica_n1/] and I am [https://127.0.0.1:38188/collection1_shard1_replica_n5/]
   [junit4]   2> 1673782 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node6=8, core_node6_recovering=8, core_node2=8, core_node4=8}, version=21}
   [junit4]   2> 1673795 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:43888]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:38188_&coreNodeName=core_node6&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 1673796 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node6, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 1673796 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:38188_, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"collection1_shard1_replica_n5","base_url":"https://127.0.0.1:38188","node_name":"127.0.0.1:38188_","state":"recovering","type":"NRT"}
   [junit4]   2> 1673796 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_    x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:38188_&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node6&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 1674296 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Starting Replication Recovery.
   [junit4]   2> 1674297 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Attempting to replicate from [https://127.0.0.1:43888/collection1_shard1_replica_n1/].
   [junit4]   2> 1674298 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098611152519168,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1674298 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1674298 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1674354 INFO  (qtp1613599179-1989) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098611211239424,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1674354 INFO  (qtp1613599179-1989) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1674442 INFO  (qtp1613599179-1989) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1674442 INFO  (qtp1613599179-1989) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3]  webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=https://127.0.0.1:43888/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 88
   [junit4]   2> 1674443 INFO  (qtp2139057069-1945) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false
   [junit4]   2> 1674443 INFO  (qtp2139057069-1945) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n5]  webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=https://127.0.0.1:43888/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0
   [junit4]   2> 1674443 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  webapp= path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 145
   [junit4]   2> 1674445 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0
   [junit4]   2> 1674468 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Master's generation: 3
   [junit4]   2> 1674468 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Master's version: 1546000101387
   [junit4]   2> 1674468 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Slave's generation: 2
   [junit4]   2> 1674468 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Slave's version: 1546000037078
   [junit4]   2> 1674468 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Starting replication process
   [junit4]   2> 1675424 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&tlogFiles=false&wt=javabin&version=2&command=filelist} status=0 QTime=948
   [junit4]   2> 1675425 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Number of files in latest index in master: 45
   [junit4]   2> 1675686 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=28, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
   [junit4]   2> 1675700 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DefaultSolrCoreState New IndexWriter is ready to be used.
   [junit4]   2> 1675700 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Starting download (fullCopy=false) to MMapDirectory@/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_889C124021662A08-001/shard-2-001/cores/collection1_shard1_replica_n5/data/index.20181228132830945 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@db7d083
   [junit4]   2> 1675702 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.nvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675707 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene70_0.dvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675737 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fdt&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675753 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.pos&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675774 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fdx&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675791 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene70_0.dvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675871 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.doc&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675887 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.tim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675907 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_Lucene50_0.tip&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675923 INFO  (qtp30671144-1903) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675964 INFO  (qtp30671144-1905) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i_2.liv&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1675984 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.dii&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676000 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.nvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676002 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.fnm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676016 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_i.dim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676047 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_1.liv&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676065 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.dii&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676082 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.nvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676102 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fnm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676118 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.dim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676138 INFO  (qtp30671144-1902) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene70_0.dvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676178 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.nvd&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676199 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676219 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene70_0.dvm&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676235 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.pos&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676256 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fdt&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676272 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.tip&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676292 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.doc&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676308 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j_Lucene50_0.tim&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676329 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=_j.fdx&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676346 INFO  (qtp30671144-1901) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/replication params={generation=3&qt=/replication&file=segments_3&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0
   [junit4]   2> 1676383 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Bytes downloaded: 20184, Bytes skipped downloading: 0
   [junit4]   2> 1676383 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.h.IndexFetcher Total time taken for download (fullCopy=false,bytesDownloaded=20184) : 1 secs (20184 bytes/sec) to MMapDirectory@/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_889C124021662A08-001/shard-2-001/cores/collection1_shard1_replica_n5/data/index.20181228132830945 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@db7d083
   [junit4]   2> 1676432 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@50ee2ec6
   [junit4]   2> 1676714 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DefaultSolrCoreState New IndexWriter is ready to be used.
   [junit4]   2> 1676802 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.s.SolrIndexSearcher Opening [Searcher@4e3483e3[collection1_shard1_replica_n5] main]
   [junit4]   2> 1676804 INFO  (searcherExecutor-432-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [collection1_shard1_replica_n5] Registered new searcher Searcher@4e3483e3[collection1_shard1_replica_n5] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C3:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000037749}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_i(7.7.0):C455/103:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000099780}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=2) Uninverting(_j(7.7.0):C196/46:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1546000101439}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=1)))}
   [junit4]   2> 1676805 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy No replay needed.
   [junit4]   2> 1676806 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Replication Recovery was successful.
   [junit4]   2> 1676806 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Registering as Active after recovery.
   [junit4]   2> 1676836 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node6=8, core_node2=8, core_node4=8}, version=22}
   [junit4]   2> 1676838 INFO  (TEST-HdfsRestartWhileUpdatingTest.test-seed#[889C124021662A08]) [    ] o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: collection1
   [junit4]   2> 1676838 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery.
   [junit4]   2> 1676842 INFO  (recoveryExecutor-469-thread-1-processing-n:127.0.0.1:38188_ x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true]
   [junit4]   2> 1676863 INFO  (qtp1394073135-1632) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098613842116608,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1676863 INFO  (qtp1394073135-1632) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@195367fa commitCommandVersion:1621098613842116608
   [junit4]   2> 1676904 INFO  (qtp1394073135-1632) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@10ae4da4[control_collection_shard1_replica_n1] main]
   [junit4]   2> 1676920 INFO  (qtp1394073135-1632) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1676920 INFO  (searcherExecutor-355-thread-1-processing-n:127.0.0.1:34963_ x:control_collection_shard1_replica_n1 c:control_collection s:shard1 r:core_node2) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [control_collection_shard1_replica_n1] Registered new searcher Searcher@10ae4da4[control_collection_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_1n(7.7.0):c655/252:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, mergeMaxNumSegments=-1, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=merge, mergeFactor=2, os.version=4.4.0-112-generic, timestamp=1546000093956}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=1) Uninverting(_1v(7.7.0):C373/223:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1546000098705}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]:delGen=2) Uninverting(_1w(7.7.0):C158:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1546000101642}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 1676920 INFO  (qtp1394073135-1632) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1]  webapp= path=/update params={waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 57
   [junit4]   2> 1676932 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098613914468352,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1676932 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1676932 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1676992 INFO  (qtp2139057069-1946) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098613977382912,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1676992 INFO  (qtp2139057069-1946) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1676993 INFO  (qtp1613599179-1990) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098613978431488,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1676995 INFO  (qtp1613599179-1990) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1676995 INFO  (qtp2139057069-1946) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1676995 INFO  (qtp2139057069-1946) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n5]  webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:43888/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 3
   [junit4]   2> 1676996 INFO  (qtp1613599179-1990) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1676996 INFO  (qtp1613599179-1990) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3]  webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:43888/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 3
   [junit4]   2> 1676996 INFO  (qtp30671144-1904) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  webapp= path=/update params={waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 64
   [junit4]   2> 1677009 INFO  (qtp30671144-1931) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp= path=/select params={q=*:*&distrib=false&tests=checkShardConsistency&rows=0&wt=javabin&version=2} hits=505 status=0 QTime=0
   [junit4]   2> 1677020 INFO  (qtp2139057069-1944) [n:127.0.0.1:38188_ c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.S.Request [collection1_shard1_replica_n5]  webapp= path=/select params={q=*:*&distrib=false&tests=checkShardConsistency&rows=0&wt=javabin&version=2} hits=505 status=0 QTime=0
   [junit4]   2> 1677040 INFO  (qtp1613599179-1988) [n:127.0.0.1:46358_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.S.Request [collection1_shard1_replica_n3]  webapp= path=/select params={q=*:*&distrib=false&tests=checkShardConsistency&rows=0&wt=javabin&version=2} hits=505 status=0 QTime=0
   [junit4]   2> 1681059 INFO  (TEST-HdfsRestartWhileUpdatingTest.test-seed#[889C124021662A08]) [    ] o.a.s.c.AbstractFullDistribZkTestBase Wait for recoveries to finish - wait 30 for each attempt
   [junit4]   2> 1681059 INFO  (TEST-HdfsRestartWhileUpdatingTest.test-seed#[889C124021662A08]) [    ] o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: collection1 failOnTimeout:true timeout (sec):30
   [junit4]   2> 1681060 INFO  (TEST-HdfsRestartWhileUpdatingTest.test-seed#[889C124021662A08]) [    ] o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: collection1
   [junit4]   2> 1681062 INFO  (qtp1394073135-1628) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098618245087232,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1681062 INFO  (qtp1394073135-1628) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1681062 INFO  (qtp1394073135-1628) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1681062 INFO  (qtp1394073135-1628) [n:127.0.0.1:34963_ c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1]  webapp= path=/update params={waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 0
   [junit4]   2> 1681080 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1621098618263961600,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1681080 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1681080 INFO  (qtp30671144-1932) [n:127.0.0.1:43888_ c:coll

[...truncated too long message...]

), 29parent_f1_s=Lucene50(blocksize=128), 33parent_f1_s=PostingsFormat(name=Memory), 39parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 3child0_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 4child3_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 12parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 22parent_f1_s=PostingsFormat(name=Memory), 1grand0_s=Lucene50(blocksize=128), 7parent_f1_s=Lucene50(blocksize=128), 2child1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 32parent_f1_s=Lucene50(blocksize=128), 3parent_f2_s=PostingsFormat(name=Memory), 42parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 6grand3_s=Lucene50(blocksize=128), 31parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 41parent_f2_s=PostingsFormat(name=Memory), 5grand2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 11parent_f2_s=Lucene50(blocksize=128), 21parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 4grand1_s=Lucene50(blocksize=128), 7grand4_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 3grand0_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 9grand6_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 8grand1_s=Lucene50(blocksize=128), 9grand2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 8grand5_s=Lucene50(blocksize=128), 40parent_f1_s=PostingsFormat(name=Memory), 9parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 30parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 14parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 20parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 1parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 7grand0_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), text_t=PostingsFormat(name=Memory), 24parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 34parent_f2_s=PostingsFormat(name=Memory), 13parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), type_s=PostingsFormat(name=Memory), 23parent_f2_s=PostingsFormat(name=Memory), 5child1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 10parent_f1_s=Lucene50(blocksize=128), 33parent_f2_s=Lucene50(blocksize=128), 22parent_f2_s=Lucene50(blocksize=128), 4child0_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 12parent_f2_s=PostingsFormat(name=Memory), 42parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 32parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 31parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 5grand1_s=PostingsFormat(name=Memory), 6grand4_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 11parent_f1_s=PostingsFormat(name=Memory), 21parent_f1_s=Lucene50(blocksize=128), 4grand2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 7grand3_s=PostingsFormat(name=Memory), 8grand2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 9grand5_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 8grand6_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 
9grand1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 41parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 37parent_f1_s=PostingsFormat(name=Memory), 40parent_f2_s=Lucene50(blocksize=128), 14parent_f1_s=Lucene50(blocksize=128), 20parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 34parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 24parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 30parent_f2_s=PostingsFormat(name=Memory), 5parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 6grand0_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 10parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 5child0_s=Lucene50(blocksize=128), 8parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 27parent_f2_s=PostingsFormat(name=Memory), 2parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 4child1_s=Lucene50(blocksize=128), 2grand1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 17parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 3child2_s=Lucene50(blocksize=128), 16parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 26parent_f2_s=Lucene50(blocksize=128), 1child0_s=Lucene50(blocksize=128), 36parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 7grand6_s=Lucene50(blocksize=128), 18parent_f1_s=Lucene50(blocksize=128), 28parent_f1_s=PostingsFormat(name=Memory), 5grand0_s=Lucene50(blocksize=128), 6grand5_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 7grand2_s=Lucene50(blocksize=128), 15parent_f1_s=PostingsFormat(name=Memory), 4grand3_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 9grand8_s=Lucene50(blocksize=128), 8grand3_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 6parent_f1_s=PostingsFormat(name=Memory), 25parent_f1_s=Lucene50(blocksize=128), 9grand4_s=Lucene50(blocksize=128), 8grand7_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 0parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 3grand2_s=Lucene50(blocksize=128), 35parent_f1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 9grand0_s=Lucene50(blocksize=128), 19parent_f2_s=Lucene50(blocksize=128), 4parent_f2_s=Lucene50(blocksize=128), 37parent_f2_s=Lucene50(blocksize=128), 5child4_s=Lucene50(blocksize=128), 38parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 5grand4_s=Lucene50(blocksize=128), 6grand1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 5parent_f2_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 16parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 8parent_f2_s=Lucene50(blocksize=128), 39parent_f2_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 29parent_f2_s=PostingsFormat(name=Memory), 2parent_f1_s=PostingsFormat(name=Memory), 4child2_s=PostingsFormat(name=Memory), rnd_b=PostingsFormat(name=LuceneVarGapDocFreqInterval), 27parent_f1_s=PostingsFormat(name=LuceneVarGapDocFreqInterval), 17parent_f1_s=PostingsFormat(name=Memory), 2grand0_s=PostingsFormat(name=Memory), 3child1_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 7parent_f2_s=PostingsFormat(name=Memory), 26parent_f1_s=PostingsFormat(name=Memory), 
2child0_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))), 3parent_f1_s=Lucene50(blocksize=128), 36parent_f1_s=Lucene50(blocksize=128)}, docValues:{_version_=DocValuesFormat(name=Memory)}, maxPointsInLeafNode=1495, maxMBSortInHeap=5.803080861739665, sim=RandomSimilarity(queryNorm=false): {}, locale=ar-QA, timezone=Etc/GMT-1
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=143879976,total=489684992
   [junit4]   2> NOTE: All tests run in this JVM: [FullSolrCloudDistribCmdsTest, RestartWhileUpdatingTest, HdfsRestartWhileUpdatingTest, FullSolrCloudDistribCmdsTest]
   [junit4] Completed [20/20 (7!)] on J1 in 668.91s, 1 test, 1 error <<< FAILURES!
   [junit4] 
   [junit4] 
   [junit4] Tests with failures [seed: 889C124021662A08]:
   [junit4]   - org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
   [junit4]   - org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
   [junit4]   - org.apache.solr.cloud.RestartWhileUpdatingTest.test
   [junit4]   - org.apache.solr.cloud.RestartWhileUpdatingTest (suite)
   [junit4]   - org.apache.solr.cloud.RestartWhileUpdatingTest (suite)
   [junit4]   - org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest (suite)
   [junit4]   - org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
   [junit4] 
   [junit4] 
   [junit4] JVM J0:     0.77 ..  1560.92 =  1560.15s
   [junit4] JVM J1:     0.79 ..  2424.91 =  2424.12s
   [junit4] JVM J2:     0.73 ..  1585.75 =  1585.03s
   [junit4] Execution time total: 40 minutes 24 seconds
   [junit4] Tests summary: 20 suites, 70 tests, 7 suite-level errors, 3 errors, 1 failure, 51 ignored

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 20 suites, 70 tests, 7 suite-level errors, 3 errors, 1 failure, 51 ignored [seed: 889C124021662A08]

Total time: 40 minutes 26 seconds
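
The recovery sequence logged above for collection1_shard1_replica_n5 follows the usual replication-recovery path: buffer incoming updates, publish the replica as "recovering", send PREPRECOVERY to the leader, compare index generations (leader at generation 3, replica at generation 2), download only the changed segment files with checksums (fullCopy=false), reopen a searcher, and re-register as active. The sketch below illustrates just the generation comparison and file-diff step; the function names and data shapes are assumptions for illustration, not Solr's actual IndexFetcher API.

    # Illustrative sketch only; names and structures are assumptions, not Solr's IndexFetcher.
    from typing import Dict, List, Tuple

    FileMeta = Tuple[int, str]  # (length, checksum)

    def needs_fetch(leader_generation: int, replica_generation: int) -> bool:
        # A replica only pulls index files when the leader's commit generation is newer
        # (the log shows leader generation 3 vs. replica generation 2).
        return leader_generation > replica_generation

    def files_to_download(leader_files: Dict[str, FileMeta],
                          replica_files: Dict[str, FileMeta]) -> List[str]:
        # Incremental fetch (fullCopy=false): download only files that are missing locally
        # or whose length/checksum differs from the leader's copy.
        return [name for name, meta in leader_files.items()
                if replica_files.get(name) != meta]

    if __name__ == "__main__":
        leader = {"_j.si": (512, "aa"), "_j.fdt": (20480, "bb"), "segments_3": (300, "cc")}
        replica = {"_j.si": (512, "aa")}
        if needs_fetch(3, 2):
            print(files_to_download(leader, replica))  # ['_j.fdt', 'segments_3']
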

[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro]   1/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro]   3/5 failed: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest
[repro]   3/5 failed: org.apache.solr.cloud.RestartWhileUpdatingTest
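
The per-suite tallies above come from re-running each suite five times against the failing revision; a count such as 3/5 means the failure reproduced in three of the five runs, i.e. intermittently rather than deterministically. As a rough illustration of how such a tally could be derived from junit-style XML reports (the report location, naming, and per-run layout are assumptions here, not the repro job's actual tooling):

    # Illustrative sketch only; assumes junit-style reports named TEST-*.xml under build/.
    import glob
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def tally(report_glob: str = "build/**/TEST-*.xml"):
        failed = defaultdict(int)   # suite name -> failing runs
        total = defaultdict(int)    # suite name -> total runs
        for path in glob.glob(report_glob, recursive=True):
            suite = ET.parse(path).getroot()          # <testsuite ...> element
            name = suite.get("name", path)
            total[name] += 1
            if int(suite.get("errors", "0")) or int(suite.get("failures", "0")):
                failed[name] += 1
        return {name: (failed[name], total[name]) for name in total}

    if __name__ == "__main__":
        for name, (bad, runs) in sorted(tally().items()):
            print(f"{bad}/{runs} failed: {name}")
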
[repro] git checkout c9c3ef0ef9251ee07fc65c1e06b5e887485f1f31
Previous HEAD position was c236ad8... SOLR-13088: Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
HEAD is now at c9c3ef0... LUCENE-8625: int overflow in ByteBuffersDataInput.sliceBufferList
[repro] Exiting with code 256
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)