Posted to dev@lucene.apache.org by Policeman Jenkins Server <je...@thetaphi.de> on 2015/05/21 00:49:07 UTC

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 12584 - Failure!

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12584/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 60176 lines...]
-documentation-lint:
    [jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
     [echo] Checking for broken links...
     [exec] 
     [exec] Crawl/parse...
     [exec] 
     [exec] Verify...
     [exec] 
     [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec] 
     [exec] Broken javadocs links were found!

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:621: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:634: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2547: exec returned: 1

Total time: 56 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
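
A note on the failure above: the documentation-lint target's link checker fails when a generated javadoc page links to a type for which no page was generated. The likely shape here, given that SumAgg.Merger.html points at a missing FacetMerger.Context.html, is a public class whose javadoc-visible signatures reference a type nested inside a package-private class. An illustrative sketch of that pattern (a hypothetical reconstruction, not the actual Solr source):

    // SumAgg.java -- illustrative only.
    package org.apache.solr.search.facet;

    // Package-private, so javadoc generates no page for it or for its
    // nested Context class.
    abstract class FacetMerger {
        public static class Context {}
        public abstract void merge(Object facetResult, Context mcontext);
    }

    public class SumAgg {
        // Public and documented: SumAgg.Merger.html IS generated, and the
        // inherited public merge(...) signature is rendered with a link to
        // the never-generated FacetMerger.Context.html -- the broken link
        // the crawler flags above.
        public static class Merger extends FacetMerger {
            @Override
            public void merge(Object facetResult, Context mcontext) {}
        }
    }

The usual fix is either to make the enclosing type public (so its page is generated) or to keep the package-private type out of public signatures.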



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b60) - Build # 12587 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12587/
Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls

Error Message:
Shard split did not complete. Last recorded state: running expected:<[completed]> but was:<[running]>

Stack Trace:
org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: running expected:<[completed]> but was:<[running]>
	at __randomizedtesting.SeedInfo.seed([8AB2E1776CCFAE0C:D2D66D166AA506D8]:0)
	at org.junit.Assert.assertEquals(Assert.java:125)
	at org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:101)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:502)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at java.lang.Thread.run(Thread.java:745)
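
Context for the assertion above: testSolrJAPICalls submits Collections API operations with an async id and then polls REQUESTSTATUS, expecting the recorded state to reach "completed"; here the SPLITSHARD request was still "running" when the test stopped waiting. At the HTTP level the pattern looks roughly like the sketch below (SPLITSHARD, REQUESTSTATUS, async, and requestid are the documented Collections API parameters; the host, collection name, request id, and polling budget are illustrative, not the test's actual code):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class AsyncSplitShardPoll {
        // Tiny helper: fetch a URL and return the response body as a string.
        static String get(String url) throws Exception {
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
                StringBuilder sb = new StringBuilder();
                for (String line; (line = r.readLine()) != null; ) {
                    sb.append(line).append('\n');
                }
                return sb.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            String base = "http://127.0.0.1:8983/solr"; // illustrative host/port
            // 1) Submit the shard split asynchronously; the call returns at
            //    once and the work continues in the Overseer.
            get(base + "/admin/collections?action=SPLITSHARD"
                     + "&collection=mycollection&shard=shard1&async=1001");
            // 2) Poll the async request status until it leaves "running".
            for (int i = 0; i < 60; i++) {
                String status = get(base
                        + "/admin/collections?action=REQUESTSTATUS&requestid=1001");
                if (status.contains("completed") || status.contains("failed")) {
                    System.out.println(status);
                    return;
                }
                Thread.sleep(1000);
            }
            // This is the situation the assertion above reports: the recorded
            // state never moved past "running" within the polling budget.
            System.out.println("split still running after timeout");
        }
    }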




Build Log:
[...truncated 10103 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/init-core-data-001
   [junit4]   2> 317297 T1514 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 317297 T1514 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /ynfk/o
   [junit4]   2> 317299 T1514 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 317300 T1515 oasc.ZkTestServer$2$1.setClientPort client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 317300 T1515 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 317400 T1514 oasc.ZkTestServer.run start zk server on port:47056
   [junit4]   2> 317400 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 317401 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 317403 T1522 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@74f6dea3 name:ZooKeeperConnection Watcher:127.0.0.1:47056 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 317403 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 317404 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 317404 T1514 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 317405 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 317406 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 317407 T1525 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1ca1324e name:ZooKeeperConnection Watcher:127.0.0.1:47056/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 317407 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 317407 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 317407 T1514 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 317408 T1514 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 317408 T1514 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 317409 T1514 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 317409 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 317410 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 317411 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 317411 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/schema.xml
   [junit4]   2> 317411 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 317412 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 317412 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 317412 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/stopwords.txt
   [junit4]   2> 317413 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
   [junit4]   2> 317413 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/protwords.txt
   [junit4]   2> 317414 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
   [junit4]   2> 317414 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/currency.xml
   [junit4]   2> 317414 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
   [junit4]   2> 317414 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/enumsConfig.xml
   [junit4]   2> 317415 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 317415 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/open-exchange-rates.json
   [junit4]   2> 317416 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 317416 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 317417 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
   [junit4]   2> 317417 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/old_synonyms.txt
   [junit4]   2> 317418 T1514 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
   [junit4]   2> 317418 T1514 oascc.SolrZkClient.makePath makePath: /configs/conf1/synonyms.txt
   [junit4]   2> 317459 T1514 oas.SolrTestCaseJ4.writeCoreProperties Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1
   [junit4]   2> 317461 T1514 oejs.Server.doStart jetty-9.2.10.v20150310
   [junit4]   2> 317462 T1514 oejsh.ContextHandler.doStart Started o.e.j.s.ServletContextHandler@79710378{/ynfk/o,null,AVAILABLE}
   [junit4]   2> 317467 T1514 oejs.AbstractConnector.doStart Started ServerConnector@399f6165{HTTP/1.1}{127.0.0.1:47019}
   [junit4]   2> 317467 T1514 oejs.Server.doStart Started @318671ms
   [junit4]   2> 317467 T1514 oascse.JettySolrRunner$1.lifeCycleStarted Jetty properties: {hostContext=/ynfk/o, hostPort=47019, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores}
   [junit4]   2> 317468 T1514 oass.SolrDispatchFilter.init SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 317468 T1514 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/'
   [junit4]   2> 317477 T1514 oasc.SolrXmlConfig.fromFile Loading container configuration from /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/solr.xml
   [junit4]   2> 317480 T1514 oasc.CorePropertiesLocator.<init> Config-defined core root directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores
   [junit4]   2> 317480 T1514 oasc.CoreContainer.<init> New CoreContainer 2020001894
   [junit4]   2> 317480 T1514 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/]
   [junit4]   2> 317480 T1514 oasc.CoreContainer.load loading shared library: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/lib
   [junit4]   2> 317481 T1514 oasc.SolrResourceLoader.addToClassLoader WARN Can't find (or read) directory to add to classloader: lib (resolved as: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/lib).
   [junit4]   2> 317484 T1514 oashc.HttpShardHandlerFactory.init created with socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
   [junit4]   2> 317485 T1514 oasu.UpdateShardHandler.<init> Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 317485 T1514 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 317486 T1514 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 317486 T1514 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 317486 T1514 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:47056/solr
   [junit4]   2> 317486 T1514 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 317486 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 317487 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 317489 T1539 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@378749a6 name:ZooKeeperConnection Watcher:127.0.0.1:47056 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 317489 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 317489 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 317490 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 317491 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@20a4936e name:ZooKeeperConnection Watcher:127.0.0.1:47056/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 317491 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 317492 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/queue
   [junit4]   2> 317493 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/collection-queue-work
   [junit4]   2> 317494 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/collection-map-running
   [junit4]   2> 317495 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/collection-map-completed
   [junit4]   2> 317496 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/collection-map-failure
   [junit4]   2> 317496 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /live_nodes
   [junit4]   2> 317497 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /aliases.json
   [junit4]   2> 317497 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /clusterstate.json
   [junit4]   2> 317498 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:47019_ynfk%2Fo
   [junit4]   2> 317498 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:47019_ynfk%2Fo
   [junit4]   2> 317499 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer_elect
   [junit4]   2> 317499 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer_elect/election
   [junit4]   2> 317500 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer.close Overseer (id=null) closing
   [junit4]   2> 317501 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader 127.0.0.1:47019_ynfk%2Fo
   [junit4]   2> 317501 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer_elect/leader
   [junit4]   2> 317501 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer.start Overseer (id=93859402747412483-127.0.0.1:47019_ynfk%2Fo-n_0000000000) starting
   [junit4]   2> 317502 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /overseer/queue-work
   [junit4]   2> 317504 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.OverseerAutoReplicaFailoverThread.<init> Starting OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000 autoReplicaFailoverWaitAfterExpiration=30000 autoReplicaFailoverBadNodeExpiration=60000
   [junit4]   2> 317504 T1514 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 317504 T1544 n:127.0.0.1:47019_ynfk%2Fo oasc.OverseerCollectionProcessor.run Process current queue of collection creations
   [junit4]   2> 317505 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
   [junit4]   2> 317506 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
   [junit4]   2> 317506 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist. Skipping setup for authorization module.
   [junit4]   2> 317506 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CorePropertiesLocator.discover Looking for core definitions underneath /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores
   [junit4]   2> 317507 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1, collection=control_collection, absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/, coreNodeName=, dataDir=data/, shard=}
   [junit4]   2> 317507 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CorePropertiesLocator.discoverUnder Found core collection1 in /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/
   [junit4]   2> 317507 T1514 n:127.0.0.1:47019_ynfk%2Fo oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 317508 T1546 n:127.0.0.1:47019_ynfk%2Fo c:control_collection x:collection1 oasc.ZkController.publish publishing core=collection1 state=down collection=control_collection
   [junit4]   2> 317508 T1546 n:127.0.0.1:47019_ynfk%2Fo c:control_collection x:collection1 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 317508 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 317508 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 317509 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:47019/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:47019_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "operation":"state"} current state version: 0
   [junit4]   2> 317509 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:47019/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:47019_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "operation":"state"}
   [junit4]   2> 317509 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ClusterStateMutator.createCollection building a new cName: control_collection
   [junit4]   2> 317509 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 317510 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318509 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 318509 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.ZkController.createCollectionZkNode Check for collection zkNode:control_collection
   [junit4]   2> 318509 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 318509 T1546 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader.readConfigName Load collection config from:/collections/control_collection
   [junit4]   2> 318509 T1546 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader.readConfigName path=/collections/control_collection configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 318510 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/'
   [junit4]   2> 318516 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.Config.<init> loaded config solrconfig.xml with version 0 
   [junit4]   2> 318519 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 318522 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 318526 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 318527 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 318530 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 318588 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.init Initialized with rates=open-exchange-rates.json, refreshInterval=1440.
   [junit4]   2> 318591 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 318592 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 318596 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 318597 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 318599 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 318599 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key IMPORTANT NOTE
   [junit4]   2> 318599 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key, got STRING
   [junit4]   2> 318599 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 318599 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key IMPORTANT NOTE
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key, got STRING
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration from collection control_collection
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/], dataDir=[null]
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@45022fa6
   [junit4]   2> 318600 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/data
   [junit4]   2> 318601 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/data/index/
   [junit4]   2> 318601 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.initIndex WARN [collection1] Solr index directory '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/data/index' doesn't exist. Creating new index...
   [junit4]   2> 318601 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/control-001/cores/collection1/data/index
   [junit4]   2> 318601 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=12, maxMergeAtOnceExplicit=28, maxMergedSegmentMB=4.7451171875, floorSegmentMB=1.8916015625, forceMergeDeletesPctAllowed=22.582409578349576, segmentsPerTier=17.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 318602 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@39ee4a2b lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@3ae3c294),segFN=segments_1,generation=1}
   [junit4]   2> 318602 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 318604 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "nodistrib"
   [junit4]   2> 318604 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "dedupe"
   [junit4]   2> 318604 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 318604 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "stored_sig"
   [junit4]   2> 318605 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 318605 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-explicit"
   [junit4]   2> 318605 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 318605 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 318605 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 318606 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 318607 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 318607 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 318608 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 318610 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.RequestHandlers.initHandlersFromConfig Registered paths: /admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 318610 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.initStatsCache Using default statsCache cache: org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 318610 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.UpdateHandler.<init> Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 318611 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
   [junit4]   2> 318611 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 318611 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 318611 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=44, maxMergeAtOnceExplicit=31, maxMergedSegmentMB=2.40625, floorSegmentMB=0.716796875, forceMergeDeletesPctAllowed=20.325306031948806, segmentsPerTier=34.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@39ee4a2b lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@3ae3c294),segFN=segments_1,generation=1}
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oass.SolrIndexSearcher.<init> Opening Searcher@621b863b[collection1] main
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max value of version field
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of _version_ for 256 version buckets from index
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_, cannot seed version bucket highest value from index
   [junit4]   2> 318612 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version in index or recent updates, using new clock 1501750445333282816
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasu.UpdateLog.seedBucketsWithHighestVersion Took 1 ms to seed version buckets with highest version 1501750445333282816
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oascc.ZkStateReader.readConfigName Load collection config from:/collections/control_collection
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oascc.ZkStateReader.readConfigName path=/collections/control_collection configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 318613 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 318614 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 318614 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 318614 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 318614 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 318614 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 318615 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
   [junit4]   2> 318615 T1547 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@621b863b[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 318615 T1546 n:127.0.0.1:47019_ynfk%2Fo x:collection1 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 318615 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.register Register replica - core:collection1 address:http://127.0.0.1:47019/ynfk/o collection:control_collection shard:shard1
   [junit4]   2> 318615 T1514 n:127.0.0.1:47019_ynfk%2Fo oass.SolrDispatchFilter.init user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 318616 T1514 n:127.0.0.1:47019_ynfk%2Fo oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 318616 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: /collections/control_collection/leader_elect/shard1/election
   [junit4]   2> 318617 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 318617 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 318618 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard1
   [junit4]   2> 318619 T1553 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1f1eae2a name:ZooKeeperConnection Watcher:127.0.0.1:47056/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 318619 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 318619 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 318619 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 318619 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "operation":"leader",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"control_collection"} current state version: 1
   [junit4]   2> 318619 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> 318619 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> ASYNC  NEW_CORE C229 name=collection1 org.apache.solr.core.SolrCore@29c3f3a8 url=http://127.0.0.1:47019/ynfk/o/collection1 node=127.0.0.1:47019_ynfk%2Fo C229_STATE=coll:control_collection core:collection1 props:{core=collection1, base_url=http://127.0.0.1:47019/ynfk/o, node_name=127.0.0.1:47019_ynfk%2Fo, state=down}
   [junit4]   2> 318620 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 C229 oasc.SyncStrategy.sync Sync replicas to http://127.0.0.1:47019/ynfk/o/collection1/
   [junit4]   2> 318620 T1514 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 318620 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 C229 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 318621 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 C229 oasc.SyncStrategy.syncToMe http://127.0.0.1:47019/ynfk/o/collection1/ has no replicas
   [junit4]   2> 318621 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: http://127.0.0.1:47019/ynfk/o/collection1/ shard1
   [junit4]   2> 318621 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: /collections/control_collection/leaders/shard1
   [junit4]   2> 318621 T1514 oasc.ChaosMonkey.monkeyLog monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 318622 T1514 oasc.AbstractFullDistribZkTestBase.createJettys Creating collection1 with stateFormat=2
   [junit4]   2> 318622 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 318622 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 318622 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 318623 T1556 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@2baf1e68 name:ZooKeeperConnection Watcher:127.0.0.1:47056/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 318623 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "operation":"leader",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "base_url":"http://127.0.0.1:47019/ynfk/o",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "state":"active"} current state version: 1
   [junit4]   2> 318623 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 318624 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 318625 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 318625 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "operation":"create",
   [junit4]   2> 	  "name":"collection1",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "stateFormat":"2"} current state version: 1
   [junit4]   2> 318626 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ClusterStateMutator.createCollection building a new cName: collection1
   [junit4]   2> 318626 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318626 T1553 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318665 T1514 oas.SolrTestCaseJ4.writeCoreProperties Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1
   [junit4]   2> 318666 T1514 oasc.AbstractFullDistribZkTestBase.createJettys create jetty 1 in directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001
   [junit4]   2> 318667 T1514 oejs.Server.doStart jetty-9.2.10.v20150310
   [junit4]   2> 318667 T1514 oejsh.ContextHandler.doStart Started o.e.j.s.ServletContextHandler@167014f9{/ynfk/o,null,AVAILABLE}
   [junit4]   2> 318668 T1514 oejs.AbstractConnector.doStart Started ServerConnector@708335dc{HTTP/1.1}{127.0.0.1:54346}
   [junit4]   2> 318668 T1514 oejs.Server.doStart Started @319872ms
   [junit4]   2> 318668 T1514 oascse.JettySolrRunner$1.lifeCycleStarted Jetty properties: {solrconfig=solrconfig.xml, hostContext=/ynfk/o, hostPort=54346, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores}
   [junit4]   2> 318669 T1514 oass.SolrDispatchFilter.init SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 318669 T1514 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/'
   [junit4]   2> 318673 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.register We are http://127.0.0.1:47019/ynfk/o/collection1/ and leader is http://127.0.0.1:47019/ynfk/o/collection1/
   [junit4]   2> 318673 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.register No LogReplay needed for core=collection1 baseURL=http://127.0.0.1:47019/ynfk/o
   [junit4]   2> 318674 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.checkRecovery I am the leader, no recovery necessary
   [junit4]   2> 318674 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.publish publishing core=collection1 state=active collection=control_collection
   [junit4]   2> 318674 T1550 n:127.0.0.1:47019_ynfk%2Fo c:control_collection s:shard1 x:collection1 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 318674 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 318675 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "core_node_name":"core_node1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:47019/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:47019_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "operation":"state"} current state version: 2
   [junit4]   2> 318675 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "core_node_name":"core_node1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:47019/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:47019_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "operation":"state"}
   [junit4]   2> 318676 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to create_collection /collections/collection1/state.json
   [junit4]   2> 318676 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318676 T1553 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318678 T1514 oasc.SolrXmlConfig.fromFile Loading container configuration from /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/solr.xml
   [junit4]   2> 318681 T1514 oasc.CorePropertiesLocator.<init> Config-defined core root directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores
   [junit4]   2> 318681 T1514 oasc.CoreContainer.<init> New CoreContainer 1503237030
   [junit4]   2> 318681 T1514 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/]
   [junit4]   2> 318681 T1514 oasc.CoreContainer.load loading shared library: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/lib
   [junit4]   2> 318681 T1514 oasc.SolrResourceLoader.addToClassLoader WARN Can't find (or read) directory to add to classloader: lib (resolved as: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/lib).
   [junit4]   2> 318685 T1514 oashc.HttpShardHandlerFactory.init created with socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
   [junit4]   2> 318686 T1514 oasu.UpdateShardHandler.<init> Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 318686 T1514 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 318687 T1514 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 318687 T1514 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 318687 T1514 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:47056/solr
   [junit4]   2> 318687 T1514 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 318687 T1514 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 318688 T1514 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 318689 T1570 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@39a512a2 name:ZooKeeperConnection Watcher:127.0.0.1:47056 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 318689 T1514 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 318690 T1514 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 318695 T1514 n:127.0.0.1:54346_ynfk%2Fo oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 318696 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1257eb2 name:ZooKeeperConnection Watcher:127.0.0.1:47056/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 318696 T1514 n:127.0.0.1:54346_ynfk%2Fo oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 318698 T1514 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 318778 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318778 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 318778 T1553 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 319700 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:54346_ynfk%2Fo
   [junit4]   2> 319701 T1514 n:127.0.0.1:54346_ynfk%2Fo oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:54346_ynfk%2Fo
   [junit4]   2> 319702 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.Overseer.close Overseer (id=null) closing
   [junit4]   2> 319702 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
   [junit4]   2> 319703 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist. Skipping setup for authorization module.
   [junit4]   2> 319703 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CorePropertiesLocator.discover Looking for core definitions underneath /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores
   [junit4]   2> 319704 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1, collection=collection1, absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/, coreNodeName=, dataDir=data/, shard=}
   [junit4]   2> 319704 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CorePropertiesLocator.discoverUnder Found core collection1 in /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/
   [junit4]   2> 319704 T1514 n:127.0.0.1:54346_ynfk%2Fo oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 319705 T1574 n:127.0.0.1:54346_ynfk%2Fo c:collection1 x:collection1 oasc.ZkController.publish publishing core=collection1 state=down collection=collection1
   [junit4]   2> 319705 T1574 n:127.0.0.1:54346_ynfk%2Fo c:collection1 x:collection1 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 319706 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 319706 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.preRegister Registering watch for external collection collection1
   [junit4]   2> 319706 T1574 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.addZkWatch addZkWatch collection1
   [junit4]   2> 319706 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "operation":"state"} current state version: 4
   [junit4]   2> 319706 T1574 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.addZkWatch Updating collection state at /collections/collection1/state.json from ZooKeeper... 
   [junit4]   2> 319707 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "operation":"state"}
   [junit4]   2> 319707 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Collection already exists with numShards=1
   [junit4]   2> 319707 T1574 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.updateWatchedCollection Updating data for collection1 to ver 0 
   [junit4]   2> 319707 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 319707 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 319808 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to update_collection /collections/collection1/state.json version: 0
   [junit4]   2> 319809 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$7.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json for collection collection1 has occurred - updating... (live nodes size: 2)
   [junit4]   2> 319809 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.updateWatchedCollection Updating data for collection1 to ver 1 
   [junit4]   2> 320707 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 320707 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 320708 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 320708 T1574 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 320708 T1574 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 320708 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/'
   [junit4]   2> 320716 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.Config.<init> loaded config solrconfig.xml with version 0 
   [junit4]   2> 320719 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 320721 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 320726 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 320726 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 320730 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 320780 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.init Initialized with rates=open-exchange-rates.json, refreshInterval=1440.
   [junit4]   2> 320783 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 320784 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 320788 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 320790 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 320791 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 320791 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key IMPORTANT NOTE
   [junit4]   2> 320791 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key, got STRING
   [junit4]   2> 320791 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from open-exchange-rates.json
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key IMPORTANT NOTE
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key, got STRING
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration from collection collection1
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/], dataDir=[null]
   [junit4]   2> 320792 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@45022fa6
   [junit4]   2> 320793 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/data
   [junit4]   2> 320793 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/data/index/
   [junit4]   2> 320793 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.initIndex WARN [collection1] Solr index directory '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/data/index' doesn't exist. Creating new index...
   [junit4]   2> 320793 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/collection1/data/index
   [junit4]   2> 320793 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=12, maxMergeAtOnceExplicit=28, maxMergedSegmentMB=4.7451171875, floorSegmentMB=1.8916015625, forceMergeDeletesPctAllowed=22.582409578349576, segmentsPerTier=17.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 320794 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@39ce42e0 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@4a158bdc),segFN=segments_1,generation=1}
   [junit4]   2> 320794 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 320796 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "nodistrib"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "dedupe"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "stored_sig"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-explicit"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 320797 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 320798 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 320799 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 320799 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 320800 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 320801 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 320804 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.RequestHandlers.initHandlersFromConfig Registered paths: /admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 320804 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.initStatsCache Using default statsCache cache: org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 320805 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.UpdateHandler.<init> Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 320805 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
   [junit4]   2> 320805 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 320805 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 320806 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=44, maxMergeAtOnceExplicit=31, maxMergedSegmentMB=2.40625, floorSegmentMB=0.716796875, forceMergeDeletesPctAllowed=20.325306031948806, segmentsPerTier=34.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 320806 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@39ce42e0 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@4a158bdc),segFN=segments_1,generation=1}
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oass.SolrIndexSearcher.<init> Opening Searcher@66ac2ded[collection1] main
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max value of version field
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of _version_ for 256 version buckets from index
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_, cannot seed version bucket highest value from index
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version in index or recent updates, using new clock 1501750447634907136
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasu.UpdateLog.seedBucketsWithHighestVersion Took 0 ms to seed version buckets with highest version 1501750447634907136
   [junit4]   2> 320807 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 320808 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 320808 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 320808 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 320808 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 320808 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 320809 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 320809 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 320809 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 320809 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 320809 T1575 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@66ac2ded[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 320810 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
   [junit4]   2> 320810 T1574 n:127.0.0.1:54346_ynfk%2Fo x:collection1 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 320810 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.register Register replica - core:collection1 address:http://127.0.0.1:54346/ynfk/o collection:collection1 shard:shard1
   [junit4]   2> 320810 T1514 n:127.0.0.1:54346_ynfk%2Fo oass.SolrDispatchFilter.init user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 320811 T1514 n:127.0.0.1:54346_ynfk%2Fo oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 320811 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: /collections/collection1/leader_elect/shard1/election
   [junit4]   2> 320813 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard1
   [junit4]   2> 320813 T1514 oas.SolrTestCaseJ4.setUp ###Starting testSolrJAPICalls
   [junit4]   2> 320813 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> 320813 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> ASYNC  NEW_CORE C230 name=collection1 org.apache.solr.core.SolrCore@559ef0ee url=http://127.0.0.1:54346/ynfk/o/collection1 node=127.0.0.1:54346_ynfk%2Fo C230_STATE=coll:collection1 core:collection1 props:{core=collection1, base_url=http://127.0.0.1:54346/ynfk/o, node_name=127.0.0.1:54346_ynfk%2Fo, state=down}
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 C230 oasc.SyncStrategy.sync Sync replicas to http://127.0.0.1:54346/ynfk/o/collection1/
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 C230 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 320814 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "operation":"leader",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"collection1"} current state version: 4
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 C230 oasc.SyncStrategy.syncToMe http://127.0.0.1:54346/ynfk/o/collection1/ has no replicas
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: http://127.0.0.1:54346/ynfk/o/collection1/ shard1
   [junit4]   2> 320814 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: /collections/collection1/leaders/shard1
   [junit4]   2> 320815 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to update_collection /collections/collection1/state.json version: 1
   [junit4]   2> 320815 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$7.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json for collection collection1 has occurred - updating... (live nodes size: 2)
   [junit4]   2> 320815 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.updateWatchedCollection Updating data for collection1 to ver 2 
   [junit4]   2> 320816 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 320817 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "operation":"leader",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "state":"active"} current state version: 4
   [junit4]   2> 320818 T1563 n:127.0.0.1:54346_ynfk%2Fo oasha.CollectionsHandler.handleRequestBody Invoked Collection Action :create with paramsasync=1001&collection.configName=conf1&name=testasynccollectioncreation&action=CREATE&numShards=1&wt=javabin&version=2 
   [junit4]   2> 320818 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to update_collection /collections/collection1/state.json version: 2
   [junit4]   2> 320818 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$7.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json for collection collection1 has occurred - updating... (live nodes size: 2)
   [junit4]   2> 320819 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.updateWatchedCollection Updating data for collection1 to ver 3 
   [junit4]   2> 320819 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/collection-queue-work state SyncConnected
   [junit4]   2> 320819 T1563 n:127.0.0.1:54346_ynfk%2Fo oass.HttpSolrCall.handleAdminRequest [admin] webapp=null path=/admin/collections params={async=1001&collection.configName=conf1&name=testasynccollectioncreation&action=CREATE&numShards=1&wt=javabin&version=2} status=0 QTime=1 
   [junit4]   2> 320821 T1544 n:127.0.0.1:47019_ynfk%2Fo oasc.OverseerCollectionProcessor.run Overseer Collection Processor: Get the message id:/overseer/collection-queue-work/qn-0000000000 message:{
   [junit4]   2> 	  "name":"testasynccollectioncreation",
   [junit4]   2> 	  "fromApi":"true",
   [junit4]   2> 	  "collection.configName":"conf1",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "async":"1001",
   [junit4]   2> 	  "stateFormat":"2",
   [junit4]   2> 	  "operation":"create"}
   [junit4]   2> 320822 T1580 n:127.0.0.1:47019_ynfk%2Fo c:testasynccollectioncreation oasc.OverseerCollectionProcessor.processMessage WARN OverseerCollectionProcessor.processMessage : create , {
   [junit4]   2> 	  "name":"testasynccollectioncreation",
   [junit4]   2> 	  "fromApi":"true",
   [junit4]   2> 	  "collection.configName":"conf1",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "async":"1001",
   [junit4]   2> 	  "stateFormat":"2",
   [junit4]   2> 	  "operation":"create"}
   [junit4]   2> 320823 T1564 n:127.0.0.1:54346_ynfk%2Fo oasha.CollectionsHandler.handleRequestBody Invoked Collection Action :requeststatus with paramsrequestid=1001&action=REQUESTSTATUS&wt=javabin&version=2 
   [junit4]   2> 320823 T1580 n:127.0.0.1:47019_ynfk%2Fo c:testasynccollectioncreation oasc.OverseerCollectionProcessor.createConfNode creating collections conf node /collections/testasynccollectioncreation 
   [junit4]   2> 320837 T1580 n:127.0.0.1:47019_ynfk%2Fo c:testasynccollectioncreation oascc.SolrZkClient.makePath makePath: /collections/testasynccollectioncreation
   [junit4]   2> 320837 T1564 n:127.0.0.1:54346_ynfk%2Fo oass.HttpSolrCall.handleAdminRequest [admin] webapp=null path=/admin/collections params={requestid=1001&action=REQUESTSTATUS&wt=javabin&version=2} status=0 QTime=14 
   [junit4]   2> 320839 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 320839 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "name":"testasynccollectioncreation",
   [junit4]   2> 	  "fromApi":"true",
   [junit4]   2> 	  "collection.configName":"conf1",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "async":"1001",
   [junit4]   2> 	  "stateFormat":"2",
   [junit4]   2> 	  "operation":"create"} current state version: 4
   [junit4]   2> 320839 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ClusterStateMutator.createCollection building a new cName: testasynccollectioncreation
   [junit4]   2> 320840 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to create_collection /collections/testasynccollectioncreation/state.json
   [junit4]   2> 320841 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 320841 T1553 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 320841 T1542 n:127.0.0.1:47019_ynfk%2Fo oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 320867 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.register We are http://127.0.0.1:54346/ynfk/o/collection1/ and leader is http://127.0.0.1:54346/ynfk/o/collection1/
   [junit4]   2> 320868 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.register No LogReplay needed for core=collection1 baseURL=http://127.0.0.1:54346/ynfk/o
   [junit4]   2> 320868 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.checkRecovery I am the leader, no recovery necessary
   [junit4]   2> 320868 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.publish publishing core=collection1 state=active collection=collection1
   [junit4]   2> 320868 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 320869 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 320869 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "core_node_name":"core_node1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "operation":"state"} current state version: 5
   [junit4]   2> 320870 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "core_node_name":"core_node1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "operation":"state"}
   [junit4]   2> 320871 T1578 n:127.0.0.1:54346_ynfk%2Fo c:collection1 s:shard1 x:collection1 oascc.ZkStateReader.updateWatchedCollection Updating data for collection1 to ver 3 
   [junit4]   2> 320939 T1580 n:127.0.0.1:47019_ynfk%2Fo c:testasynccollectioncreation oasc.OverseerCollectionProcessor.createCollection Creating SolrCores for new collection testasynccollectioncreation, shardNames [shard1] , replicationFactor : 1
   [junit4]   2> 320939 T1580 n:127.0.0.1:47019_ynfk%2Fo c:testasynccollectioncreation oasc.OverseerCollectionProcessor.createCollection Creating core testasynccollectioncreation_shard1_replica1 as part of shard shard1 of collection testasynccollectioncreation on 127.0.0.1:54346_ynfk%2Fo
   [junit4]   2> 320943 T1565 n:127.0.0.1:54346_ynfk%2Fo oass.HttpSolrCall.handleAdminRequest [admin] webapp=null path=/admin/cores params={async=100128900967965317&qt=/admin/cores&collection.configName=conf1&name=testasynccollectioncreation_shard1_replica1&action=CREATE&numShards=1&collection=testasynccollectioncreation&shard=shard1&wt=javabin&version=2} status=0 QTime=2 
   [junit4]   2> 320943 T1582 n:127.0.0.1:54346_ynfk%2Fo oasha.CoreAdminHandler.handleCreateAction core create command async=100128900967965317&qt=/admin/cores&collection.configName=conf1&name=testasynccollectioncreation_shard1_replica1&action=CREATE&numShards=1&collection=testasynccollectioncreation&shard=shard1&wt=javabin&version=2
   [junit4]   2> 320944 T1582 n:127.0.0.1:54346_ynfk%2Fo oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=testasynccollectioncreation_shard1_replica1, config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, instanceDir=testasynccollectioncreation_shard1_replica1, collection=testasynccollectioncreation, absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_replica1/, numShards=1, dataDir=data/, shard=shard1}
   [junit4]   2> 320945 T1582 n:127.0.0.1:54346_ynfk%2Fo c:testasynccollectioncreation x:testasynccollectioncreation_shard1_replica1 oasc.ZkController.publish publishing core=testasynccollectioncreation_shard1_replica1 state=down collection=testasynccollectioncreation
   [junit4]   2> 320945 T1563 n:127.0.0.1:54346_ynfk%2Fo oasha.CoreAdminHandler.handleRequestActionStatus Checking request status for : 100128900967965317
   [junit4]   2> 320946 T1563 n:127.0.0.1:54346_ynfk%2Fo oass.HttpSolrCall.handleAdminRequest [admin] webapp=null path=/admin/cores params={qt=/admin/cores&requestid=100128900967965317&action=REQUESTSTATUS&wt=javabin&version=2} status=0 QTime=1 
   [junit4]   2> 320946 T1542 n:127.0.0.1:47019_ynfk%2Fo oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 320946 T1582 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.preRegister Registering watch for external collection testasynccollectioncreation
   [junit4]   2> 320946 T1543 n:127.0.0.1:47019_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2> 	  "core":"testasynccollectioncreation_shard1_replica1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"testasynccollectioncreation",
   [junit4]   2> 	  "operation":"state"} current state version: 5
   [junit4]   2> 320947 T1582 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.addZkWatch addZkWatch testasynccollectioncreation
   [junit4]   2> 320947 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2> 	  "core":"testasynccollectioncreation_shard1_replica1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "base_url":"http://127.0.0.1:54346/ynfk/o",
   [junit4]   2> 	  "node_name":"127.0.0.1:54346_ynfk%2Fo",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"testasynccollectioncreation",
   [junit4]   2> 	  "operation":"state"}
   [junit4]   2> 320947 T1582 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.addZkWatch Updating collection state at /collections/testasynccollectioncreation/state.json from ZooKeeper... 
   [junit4]   2> 320947 T1543 n:127.0.0.1:47019_ynfk%2Fo oasco.ZkStateWriter.writePendingUpdates going to update_collection /collections/collection1/state.json version: 3
   [junit4]   2> 320948 T1582 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader.updateWatchedCollection Updating data for testasynccollectioncreation to ver 0 
   [junit4]   2> 320948 T1582 n:127.0.0.1:54346_ynfk%2Fo oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 320948 T1573 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$7.process A cluster state change: WatchedEvent state:SyncConnected type

[...truncated too long message...]

nkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_0_replica1/data/index;done=false>>]
   [junit4]   2> 450108 T1514 oasc.CachingDirectoryFactory.close Closing directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_0_replica1/data/index
   [junit4]   2> 450109 T1514 oasc.SolrCore.close [testasynccollectioncreation_shard1_1_replica1]  CLOSING SolrCore org.apache.solr.core.SolrCore@35d2476b
   [junit4]   2> 450109 T1514 oasc.ZkController.unregisterConfListener removed listener for config directory [/configs/conf1]
   [junit4]   2> 450109 T1514 oasc.ZkController.unregisterConfListener No more listeners for config directory [/configs/conf1]
   [junit4]   2> 450110 T1514 oasu.DirectUpdateHandler2.close closing DirectUpdateHandler2{commits=1,autocommits=0,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0,transaction_logs_total_size=0,transaction_logs_total_number=0}
   [junit4]   2> 450110 T1514 oasu.SolrCoreState.decrefSolrCoreState Closing SolrCoreState
   [junit4]   2> 450110 T1514 oasu.DefaultSolrCoreState.closeIndexWriter SolrCoreState ref count has reached 0 - closing IndexWriter
   [junit4]   2> 450110 T1514 oasu.DefaultSolrCoreState.closeIndexWriter closing IndexWriter with IndexWriterCloser
   [junit4]   2> 450111 T1514 oasc.SolrCore.closeSearcher [testasynccollectioncreation_shard1_1_replica1] Closing main searcher on request.
   [junit4]   2> 450121 T1514 oasc.CachingDirectoryFactory.close Closing MockDirectoryFactory - 2 directories currently being tracked
   [junit4]   2> 450122 T1514 oasc.CachingDirectoryFactory.closeCacheValue looking to close /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data [CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data;done=false>>]
   [junit4]   2> 450122 T1514 oasc.CachingDirectoryFactory.close Closing directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data
   [junit4]   2> 450122 T1514 oasc.CachingDirectoryFactory.closeCacheValue looking to close /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data/index [CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data/index;done=false>>]
   [junit4]   2> 450123 T1514 oasc.CachingDirectoryFactory.close Closing directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001/shard-1-001/cores/testasynccollectioncreation_shard1_1_replica1/data/index
   [junit4]   2> 450123 T1514 oasc.Overseer.close Overseer (id=93859402747412487-127.0.0.1:54346_ynfk%2Fo-n_0000000001) closing
   [junit4]   2> 450123 T1623 n:127.0.0.1:54346_ynfk%2Fo oasc.Overseer$ClusterStateUpdater.run Overseer Loop exiting : 127.0.0.1:54346_ynfk%2Fo
   [junit4]   2> 450128 T1619 n:127.0.0.1:54346_ynfk%2Fo oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr cannot talk to ZK
   [junit4]   2> 450129 T1514 oejs.AbstractConnector.doStop Stopped ServerConnector@708335dc{HTTP/1.1}{127.0.0.1:0}
   [junit4]   2> 450129 T1514 oejsh.ContextHandler.doStop Stopped o.e.j.s.ServletContextHandler@167014f9{/ynfk/o,null,UNAVAILABLE}
   [junit4]   2> 450131 T1514 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:47056 47056
   [junit4]   2> 450226 T1515 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:47056 47056
   [junit4]   2> 450228 T1515 oasc.ZkTestServer$ZKServerMain.runFromConfig WARN Watch limit violations: 
   [junit4]   2> 	Maximum concurrent create/delete watches above limit:
   [junit4]   2> 	
   [junit4]   2> 		3	/solr/aliases.json
   [junit4]   2> 		3	/solr/clusterstate.json
   [junit4]   2> 		2	/solr/configs/conf1
   [junit4]   2> 		2	/solr/collections/testasynccollectioncreation/state.json
   [junit4]   2> 	
   [junit4]   2> 	Maximum concurrent children watches above limit:
   [junit4]   2> 	
   [junit4]   2> 		3	/solr/live_nodes
   [junit4]   2> 		2	/solr/overseer/queue
   [junit4]   2> 		2	/solr/overseer/collection-queue-work
   [junit4]   2> 	
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CollectionsAPIAsyncDistributedZkTest -Dtests.method=testSolrJAPICalls -Dtests.seed=8AB2E1776CCFAE0C -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=sr_ME -Dtests.timezone=Africa/Bissau -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] FAILURE  133s J0 | CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls <<<
   [junit4]    > Throwable #1: org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: running expected:<[completed]> but was:<[running]>
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([8AB2E1776CCFAE0C:D2D66D166AA506D8]:0)
   [junit4]    > 	at org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:101)
   [junit4]    > 	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
   [junit4]    > 	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 450247 T1514 oas.SolrTestCaseJ4.deleteCore ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIAsyncDistributedZkTest 8AB2E1776CCFAE0C-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene50): {}, docValues:{}, sim=RandomSimilarityProvider(queryNorm=true,coord=no): {}, locale=sr_ME, timezone=Africa/Bissau
   [junit4]   2> NOTE: Linux 3.13.0-53-generic amd64/Oracle Corporation 1.9.0-ea (64-bit)/cpus=12,threads=1,free=278572592,total=526385152
   [junit4]   2> NOTE: All tests run in this JVM: [TestDocBasedVersionConstraints, TestIndexingPerformance, RecoveryAfterSoftCommitTest, TestJoin, TestJettySolrRunner, DefaultValueUpdateProcessorTest, TestReversedWildcardFilterFactory, PrimitiveFieldTypeTest, TestWordDelimiterFilterFactory, TestShardHandlerFactory, LeaderFailoverAfterPartitionTest, QueryElevationComponentTest, ZkNodePropsTest, TestOverriddenPrefixQueryForCustomFieldType, TestDynamicFieldCollectionResource, IndexBasedSpellCheckerTest, CursorMarkTest, DistributedSuggestComponentTest, ClusterStateTest, BJQParserTest, TestNonDefinedSimilarityFactory, LeaderInitiatedRecoveryOnCommitTest, TestManagedStopFilterFactory, TestDistributedSearch, SolrIndexConfigTest, LeaderElectionIntegrationTest, SignatureUpdateProcessorFactoryTest, CollectionsAPIAsyncDistributedZkTest]
   [junit4] Completed [140/494] on J0 in 132.97s, 1 test, 1 failure <<< FAILURES!

[...truncated 1094 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:484: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:229: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:511: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1433: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:991: There were test failures: 494 suites, 1969 tests, 1 failure, 57 ignored (25 assumptions)

Total time: 42 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 12586 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12586/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 60157 lines...]
-documentation-lint:
    [jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
     [echo] Checking for broken links...
     [exec] 
     [exec] Crawl/parse...
     [exec] 
     [exec] Verify...
     [exec] 
     [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec] 
     [exec] Broken javadocs links were found!

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:621: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:634: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2547: exec returned: 1

Total time: 59 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Robert Muir <rc...@gmail.com>.
The issue is the two broken links coming from SumAgg.Merger to
FacetMerger.Context; that's what the output is supposed to mean:

>      [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html

In this case, a public method refers to a package-private class.
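
A minimal sketch of the shape of the problem (hypothetical names, not
the actual Solr classes):

    // SumMerger.java -- one compilation unit; only SumMerger is public.
    package org.example.facet;

    // Package-private: javadoc never publishes a page for this class,
    // so any link pointing at it is dead.
    class MergeContext {
        int numShards;
    }

    public class SumMerger {
        // Public method with a package-private parameter type.  The
        // generated javadoc for SumMerger links to MergeContext.html,
        // which does not exist (hence BROKEN LINK), and code outside
        // org.example.facet cannot name MergeContext to call this.
        public void merge(MergeContext ctx) {
            // merge logic would go here
        }
    }

From any other package, new SumMerger().merge(...) cannot even be
written, because the parameter type is not visible there.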

On Wed, May 20, 2015 at 9:19 PM, Yonik Seeley <ys...@gmail.com> wrote:
> What's the deal with this?
>
> BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>
> FacetMerger.Context has no javadoc!
>
> -Yonik
>
>
> On Wed, May 20, 2015 at 8:39 PM, Policeman Jenkins Server
> <je...@thetaphi.de> wrote:
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12585/
>> Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseParallelGC
>>
>> All tests passed
>>
>> Build Log:
>> [...truncated 60276 lines...]
>> -documentation-lint:
>>     [jtidy] Checking for broken html (such as invalid tags)...
>>    [delete] Deleting directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
>>      [echo] Checking for broken links...
>>      [exec]
>>      [exec] Crawl/parse...
>>      [exec]
>>      [exec] Verify...
>>      [exec]
>>      [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
>>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>>      [exec]
>>      [exec]
>>      [exec] Broken javadocs links were found!
>>
>> BUILD FAILED
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:621: The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:634: The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2547: exec returned: 1
>>
>> Total time: 53 minutes 12 seconds
>> Build step 'Invoke Ant' marked build as failure
>> Archiving artifacts
>> Recording test results
>> Email was triggered for: Failure - Any
>> Sending email for trigger: Failure - Any
>>
>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by "david.w.smiley@gmail.com" <da...@gmail.com>.
For a long while on my local checkout, I have svn:ignore at the root level
set to ‘*’.  I really think we should consider committing this; I think
I’ve advocated for it before.

RE “precommit” being a pain — yeah but, like Mike already said, it catches
problems.  At least it’s faster lately than it used to be months ago.

On Thu, May 21, 2015 at 5:16 AM Michael McCandless <
lucene@mikemccandless.com> wrote:

> On Wed, May 20, 2015 at 11:38 PM, Yonik Seeley <ys...@gmail.com> wrote:
> > Seems like a bug in javadoc if it fails on valid java code with no
> > javadoc specified on the involved classes.
> >
> >> if you'd run "ant precommit" you would have gotten the same error, and
> >> (if you hadn't understood the problem) you could have asked about it
> >> before breaking the build.
> >
> > I normally only make sure unit tests pass.
>
> Please understand that by taking this attitude you waste the time of
> everyone else who does want to pass "ant precommit" before committing,
> and everyone's time scanning all these broken-build emails.
>
> If you truly refuse to pass "ant precommit" before committing, at
> least be vigilant and watch the next few builds to see if you broke
> them, and then fix it quickly.
>
> > "ant precommit" is way to picky.
>
> I agree it's picky but I think that's a feature, not a bug :)
>
> It is this way so it catches common errors that unit tests don't
> catch: a leftover #nocommit, wrong svn props, forgot to svn add, etc.
> Over time we've made it more picky each time one of us falls into a
> trap of forgetting to do XYZ before committing.
>
> And in this case it looks like it caught something truly wrong with
> your change (a public API using a package-private class).  Had you run
> it beforehand and paid attention to what it said, you could have fixed
> it...
>
> My biggest complaint is how slow it is ... I wish we could improve that.
>
> > For example it will fail if one has
> > any extra files lying around (which is normally the case for active
> > development / debugging).  Doing a clean checkout and re-applying the
> > patch just to make "ant precommit" happy is too high of a hurdle.  My
> > guess is that most committers don't use it for the majority of
> > commits.
>
> This is to try to help you remember to "svn add" files.  Why not store
> such files outside of the source tree?  Or maybe we can add them to
> svn:ignore?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Wed, May 20, 2015 at 11:38 PM, Yonik Seeley <ys...@gmail.com> wrote:
> Seems like a bug in javadoc if it fails on valid java code with no
> javadoc specified on the involved classes.
>
>> if you'd run "ant precommit" you would have gotten the same error, and
>> (if you hadn't understood the problem) you could have asked about it
>> before breaking the build.
>
> I normally only make sure unit tests pass.

Please understand that by taking this attitude you waste the time of
everyone else who does want to pass "ant precommit" before committing,
and everyone's time scanning all these broken-build emails.

If you truly refuse to pass "ant precommit" before committing, at
least be vigilant and watch the next few builds to see if you broke
them, and then fix it quickly.

> "ant precommit" is way to picky.

I agree it's picky but I think that's a feature, not a bug :)

It is this way so it catches common errors that unit tests don't
catch: a leftover #nocommit, wrong svn props, forgot to svn add, etc.
Over time we've made it more picky each time one of us falls into a
trap of forgetting to do XYZ before committing.

And in this case it looks like it caught something truly wrong with
your change (a public API using a package-private class).  Had you run
it beforehand and paid attention to what it said, you could have fixed
it...

My biggest complaint is how slow it is ... I wish we could improve that.

> For example it will fail if one has
> any extra files lying around (which is normally the case for active
> development / debugging).  Doing a clean checkout and re-applying the
> patch just to make "ant precommit" happy is too high of a hurdle.  My
> guess is that most committers don't use it for the majority of
> commits.

This is to try to help you remember to "svn add" files.  Why not store
such files outside of the source tree?  Or maybe we can add them to
svn:ignore?

Mike McCandless

http://blog.mikemccandless.com

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> If you have a public method that refers to a package-private class, nobody can really use that public
> method you exposed.

There is one exception that involves subclasses (not talking about
javadoc links to the declaring class now).

If you have a package-private class with a public method and extend it
(within the same package) with a public class, then that method can be
called through the public class (and you cannot reduce the visibility of
that public method; it remains public).

This kind of trick is used to hide scaffolding classes that are
exposed from multiple points, for example
java.lang.AbstractStringBuilder (pkg-private) and StringBuilder,
StringBuffer.
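
A tiny sketch of the pattern (hypothetical names; the real JDK case is
AbstractStringBuilder with StringBuilder/StringBuffer):

    // Builder.java -- one compilation unit; only Builder is public.
    package org.example.text;

    // Package-private scaffolding class with public methods.
    abstract class AbstractBuilder {
        private final StringBuilder buf = new StringBuilder();

        // Stays public in every subclass: the visibility of an inherited
        // public method cannot be reduced.
        public void append(char c) {
            buf.append(c);
        }

        public int length() {
            return buf.length();
        }
    }

    public final class Builder extends AbstractBuilder {
        // Inherits append(char) and length(); any package can call them
        // through Builder even though AbstractBuilder itself is invisible
        // outside org.example.text.
    }

From another package, new Builder().append('x') compiles and runs,
while AbstractBuilder cannot even be named there.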

I can be a smartass here but only because I painfully and methodically
debugged some quirky obfuscation errors resulting from this issue (in
proguard)...

Dawid

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Robert Muir <rc...@gmail.com>.
On Thu, May 21, 2015 at 8:53 AM, Yonik Seeley <ys...@gmail.com> wrote:
> On Thu, May 21, 2015 at 12:55 AM, Robert Muir <rc...@gmail.com> wrote:
>> Honestly, Yonik: this is an API bug. If you have a public method that
>> refers to a package-private class, nobody can really use that public
>> method you exposed.
>
> It's temporary - package-private stuff will migrate to public.  It's
> not in its final form and not meant to be a public API, but some
> things have to be public for internal cross-package use.
>
> My point is that javadoc shouldn't fail stuff that java is happy with.
>

But it is broken. The checker just tells you reality. If users click
that package-private parameter link in the javadocs, they will get a
404 not found.
And they won't be able to use the public method, because even though
it's public, it has a package-private class as a parameter.
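
A tiny hypothetical makes it concrete (made-up names, nothing from the
actual Solr code):

    // package one:
    class Hidden { }                          // package-private
    public class Api {
        public void process(Hidden h) { }     // public method, hidden parameter
    }

    // package two:
    //   Api api = new Api();
    //   api.process(...);   // impossible: Hidden cannot be named, constructed,
    //                       // or even declared here, so no argument can exist.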

The build did completely the right thing to fail here.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by "Ramkumar R. Aiyengar" <an...@gmail.com>.
On Thu, May 21, 2015 at 6:40 PM, Yonik Seeley <ys...@gmail.com> wrote:

> It's my opinion that it also wastes everyones time to force them to
> have a pristine checkout before committing
>

I guess there are two kinds of checks which `ant precommit` tries to
combine -- one is checks which influence the quality/correctness of the
code being checked in, and that's probably the vast majority. The
stray-files check is more of a "commit helper", whose impact is limited
to the developer's working copy; if you violate it and miss files, it
should show up as other failures in Jenkins, like compilation errors or
test failures. I don't know if there are others which fall into this
category.

One way out may be to remove the stray-files check from precommit and
instead provide a different wrapper script or ant target which wraps
around `svn ci`. That could interactively show unversioned files, ask
questions like "do you really want to go ahead?", and if you say yes,
commit anyway.

Also, have we tried pre-commit hooks before? A hook certainly won't do
all of precommit, but that way we could move some of the svn checks
(like eol-style) there, and those would be mandatory.

Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Yonik Seeley <ys...@gmail.com>.
OK, Mr. Grumpy.

It's my opinion that it also wastes everyone's time to force them to
have a pristine checkout before committing.
And I know for a *fact* that a large number of committers don't run
"ant precommit" before every commit.

> If i'm working on something, and discover that a recent commit has broken
> precommit, I revert.  no discussion, no hesitation

Shrug... I've fixed a number of other people's commits that have
inadvertently broken the build.
If it's an easy fix, I'll continue taking that path.  Seems both
easier and a bit nicer.

I'll look into the "precommit" target to see what other targets can be
run earlier to try to catch bugs without forcing one to have a
pristine checkout.
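
(Aside: the check that failed here is the documentation linter -- the
internal "-documentation-lint" target visible in the build log above.
Assuming the public target of the same name is still exposed in the
Lucene/Solr build of this era, it can be run by itself, without the
stray-files check:

    cd lucene   # or solr
    ant documentation-lint

That covers just the jtidy and broken-link checks.)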

-Yonik

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Chris Hostetter <ho...@fucit.org>.
>> Please understand that by taking this attitude you waste everyone
>> else's time who does want to pass "ant precommit" before committing,
>> and everyone else's time to scan all these broken builds emails.

+1

personally ... i'm past the point of being willing to overlook people who 
obviously break the build ... randomized / timing-sensitive test failures 
are one thing -- but failure to compile, failure to pass precommit ... 
these are obvious mistakes that can easily be caught in advance, and 
they create roadblocks for other developers.

that's un-fucking-acceptable.  

If i'm working on something, and discover that a recent commit has broken 
precommit, I revert.  no discussion, no hesitation

: It's temporary - package-private stuff will migrate to public.  It's
: not in its final form and not meant to be a public API, but some
: things have to be public for internal cross-package use.

Then commit to a branch, or keep it in a patch, or keep it in a github 
fork.  This project has evolved toward having the best safeguards we 
can against broken code making it into the repo -- if people ignore those 
safeguards, they have no right to complain when the rest of us call them 
out on their bullshit and/or revert their commits.

: My point is that javadoc shouldn't fail stuff that java is happy with.

java is happy with "nocommit" in source files, and "@author" tags -- that 
doesn't mean we as a project are.  If the most efficient way to find these 
types of problems is by validating the javadocs -- as opposed to spending 
a lot of time writing some sort of custom source code analysis -- then so 
be it.


-Hoss
http://www.lucidworks.com/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Yonik Seeley <ys...@gmail.com>.
On Thu, May 21, 2015 at 12:55 AM, Robert Muir <rc...@gmail.com> wrote:
> Honestly, Yonik: this is an API bug. If you have a public method that
> refers to a package-private class, nobody can really use that public
> method you exposed.

It's temporary - package-private stuff will migrate to public.  It's
not in its final form and not meant to be a public API, but some
things have to be public for internal cross-package use.

My point is that javadoc shouldn't fail stuff that java is happy with.

-Yonik

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Robert Muir <rc...@gmail.com>.
On Wed, May 20, 2015 at 11:38 PM, Yonik Seeley <ys...@gmail.com> wrote:
> Seems like a bug in javadoc if it fails on valid java code with no
> javadoc specified on the involved classes.

Why is it a bug in javadoc? It generates plenty of javadoc for your
code even if you don't write any, such as method signatures, return
types, and arguments. And for these, links are generated...

The current javadocs level is 'protected', so that class isn't
included, hence the broken link.
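
(The level is just the stock javadoc tool's visibility flag; sketching
the difference with a plain invocation:

    javadoc -protected -d build/docs ...   # the default level: package-private
                                           # FacetMerger is skipped, so no
                                           # FacetMerger.Context.html exists
    javadoc -package   -d build/docs ...   # would emit that page, but it is
                                           # not the level we publish at
)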

Honestly, Yonik: this is an API bug. If you have a public method that
refers to a package-private class, nobody can really use that public
method you exposed. Try it. This is just the only way we currently
have to detect API bugs like this.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Yonik Seeley <ys...@gmail.com>.
Seems like a bug in javadoc if it fails on valid java code with no
javadoc specified on the involved classes.

> if you'd run "ant precommit" you would have gotten the same error, and
> (if you hadn't understood the problem) you could have asked about it
> before breaking the build.

I normally only make sure unit tests pass.
"ant precommit" is way to picky.  For example it will fail if one as
any extra files lying around (which is normally the case for active
development / debugging).  Doing a clean checkout and re-applying the
patch just to make "ant precommit" happy is too high of a hurdle.  My
guess is that most committers don't use it for the majority of
commits.

-Yonik

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Chris Hostetter <ho...@fucit.org>.
: What's the deal with this?
: 
: BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
: 
: FacetMerger.Context has no javadoc!

heh ... "Context" is everything.

the *full* error is telling you that while trying to verify the javadocs 
file "SumAgg.Merger.html" it found 2 broken links to 
"FacetMerger.Context.html" ... the reason the links are broken is that 
you have a public method (SumAgg.Merger.merge(Object, Context)) that takes 
as its argument an instance of an object whose class is not public 
("Context" is declared as a "public" class, but it's a static inner class 
of "FacetMerger", which is a package-private class) ... so it's impossible 
for the (public) javadocs of SumAgg.Merger.merge to link anywhere 
explaining what the hell a "Context" object is.
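
reconstructed in miniature (simplified to the signature named in the 
error, not the full Solr source):

    abstract class FacetMerger {                  // package-private
        public static class Context { }           // nested "public", but only
    }                                             // reachable via FacetMerger

    public class SumAgg {
        public static class Merger {
            public void merge(Object facet, FacetMerger.Context mcontext) { }
            // Merger is public, so javadoc documents merge() -- but at the
            // default 'protected' level no page is ever generated for
            // FacetMerger.Context, hence the broken links above.
        }
    }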

if you'd run "ant precommit" you would have gotten the same error, and 
(if you hadn't understood the problem) you could have asked about it 
before breaking the build.

: >      [exec] Verify...
: >      [exec]
: >      [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
: >      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
: >      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html

-Hoss
http://www.lucidworks.com/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Yonik Seeley <ys...@gmail.com>.
What's the deal with this?

BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html

FacetMerger.Context has no javadoc!

-Yonik


On Wed, May 20, 2015 at 8:39 PM, Policeman Jenkins Server
<je...@thetaphi.de> wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12585/
> Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseParallelGC
>
> All tests passed
>
> Build Log:
> [...truncated 60276 lines...]
> -documentation-lint:
>     [jtidy] Checking for broken html (such as invalid tags)...
>    [delete] Deleting directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
>      [echo] Checking for broken links...
>      [exec]
>      [exec] Crawl/parse...
>      [exec]
>      [exec] Verify...
>      [exec]
>      [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>      [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
>      [exec]
>      [exec]
>      [exec] Broken javadocs links were found!
>
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:621: The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:634: The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2547: exec returned: 1
>
> Total time: 53 minutes 12 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12585 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12585/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 60276 lines...]
-documentation-lint:
    [jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
     [echo] Checking for broken links...
     [exec] 
     [exec] Crawl/parse...
     [exec] 
     [exec] Verify...
     [exec] 
     [exec] file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/SumAgg.Merger.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec]   BROKEN LINK: file:///home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/docs/solr-core/org/apache/solr/search/facet/FacetMerger.Context.html
     [exec] 
     [exec] Broken javadocs links were found!

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:90: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:621: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:634: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2547: exec returned: 1

Total time: 53 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any