Posted to dev@lucene.apache.org by Policeman Jenkins Server <je...@thetaphi.de> on 2014/03/18 02:26:07 UTC

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9828/
Java: 32bit/jdk1.7.0_51 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12076 lines...]
   [junit4] JVM J0: stderr was not empty, see: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140317_230107_233.syserr
   [junit4] >>> JVM J0: stderr (verbatim) ----
   [junit4] WARN: Unhandled exception in event serialization. -> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.JsonIOException: java.io.IOException: No space left on device
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:514)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:61)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:376)
   [junit4] 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] 	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] 	at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] 	at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
   [junit4] 	at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
   [junit4] 	at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
   [junit4] 	at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
   [junit4] 	at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
   [junit4] 	at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] 	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] 	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4] 	at org.apache.log4j.Category.callAppenders(Category.java:206)
   [junit4] 	at org.apache.log4j.Category.forcedLog(Category.java:391)
   [junit4] 	at org.apache.log4j.Category.log(Category.java:856)
   [junit4] 	at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:478)
   [junit4] 	at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest$FullThrottleStopableIndexingThread$1.handleError(ChaosMonkeyNothingIsSafeTest.java:284)
   [junit4] 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:256)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   [junit4] 	at java.lang.Thread.run(Thread.java:744)
   [junit4] Caused by: java.io.IOException: No space left on device
   [junit4] 	at java.io.RandomAccessFile.writeBytes0(Native Method)
   [junit4] 	at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
   [junit4] 	at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RandomAccessFileOutputStream.write(RandomAccessFileOutputStream.java:28)
   [junit4] 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
   [junit4] 	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] 	at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
   [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
   [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:113)
   [junit4] 	at java.io.OutputStreamWriter.write(OutputStreamWriter.java:194)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.string(JsonWriter.java:535)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.value(JsonWriter.java:364)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.TypeAdapters$22.write(TypeAdapters.java:626)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.TypeAdapters$22.write(TypeAdapters.java:578)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.Streams.write(Streams.java:67)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.GsonToMiniGsonTypeAdapterFactory$3.write(GsonToMiniGsonTypeAdapterFactory.java:98)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:66)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:82)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:194)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:512)
   [junit4] 	... 22 more
   [junit4] 
   [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFailure(RunListenerEmitter.java:54)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testFailure(NoExceptionRunListenerDecorator.java:55)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testFailure(BeforeAfterRunListenerDecorator.java:60)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$4.notifyListener(OrderedRunNotifier.java:129)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFailure(OrderedRunNotifier.java:126)
   [junit4] 	at com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:406)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:641)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:128)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:558)
   [junit4] 
   [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFinished(RunListenerEmitter.java:113)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testFinished(NoExceptionRunListenerDecorator.java:47)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testFinished(BeforeAfterRunListenerDecorator.java:51)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$7.notifyListener(OrderedRunNotifier.java:179)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFinished(OrderedRunNotifier.java:176)
   [junit4] 	at com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:410)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:641)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:128)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:558)
   [junit4] 
   [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFailure(RunListenerEmitter.java:52)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testFailure(NoExceptionRunListenerDecorator.java:55)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testFailure(BeforeAfterRunListenerDecorator.java:60)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$4.notifyListener(OrderedRunNotifier.java:129)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFailure(OrderedRunNotifier.java:126)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.fireTestFailure(RandomizedRunner.java:753)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:654)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:128)
   [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:558)
   [junit4] 
   [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testRunFinished(RunListenerEmitter.java:120)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testRunFinished(NoExceptionRunListenerDecorator.java:31)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testRunFinished(BeforeAfterRunListenerDecorator.java:33)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$2.notifyListener(OrderedRunNotifier.java:94)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestRunFinished(OrderedRunNotifier.java:91)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:181)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:276)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
   [junit4] 
   [junit4] WARN: Exception at main loop level. -> java.lang.RuntimeException: java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:34)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:13)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.Iterators$5.hasNext(Iterators.java:542)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:169)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:276)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
   [junit4] Caused by: java.io.IOException: Serializer already closed.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:28)
   [junit4] 	... 7 more
   [junit4] <<< JVM J0: EOF ----

[...truncated 2 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: /var/lib/jenkins/tools/java/32bit/jdk1.7.0_51/jre/bin/java -server -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps -Dtests.prefix=tests -Dtests.seed=D0DA9EC3A93F0E07 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. -Djava.io.tmpdir=. -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/tests.policy -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.filterstacks=true -Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/test:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/test-framework/lib/junit4-ant-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queries/lucene-queries-5.0-SNAP
SHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/antlr-runtime-3.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-commons-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-cli-1.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-codec-1.9.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-configuration-1.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-fileupload-1.2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-lang-2.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/concurrentlinkedhashmap-lru-1.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/dom4j-1.6.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/guava-14.0.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-annotations-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-auth-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-common-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-hdfs-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hppc-0.5.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/joda-time-2.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/org.restlet-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/org.restlet.ext.servlet-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/protobuf-java-2.5.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/spatial4j-0.4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/commons-io-2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpclient-4.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpcore-4.3.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpmime-4.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/jcl-over-slf4j-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/jul-to-slf4j-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/log4j-1.2.16.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/noggit-0.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/slf4j-api-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/slf4j-log4j12-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/wstx-asl-3.2.7.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/zookeeper-3.4.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-continuation-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-deploy-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-http-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-io-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/e
xample/lib/jetty-jmx-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-security-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-server-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-servlet-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-util-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-webapp-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-xml-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/servlet-api-3.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/example-DIH/solr/db/lib/derby-10.9.1.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/example-DIH/solr/db/lib/hsqldb-1.8.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/antlr-runtime-3.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/asm-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/asm-commons-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/cglib-nodep-2.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/commons-collections-3.2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/dom4j-1.6.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/easymock-3.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hadoop-common-2.2.0-tests.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hadoop-hdfs-2.2.0-tests.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hppc-0.5.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/javax.servlet-api-3.0.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jersey-core-1.8.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jetty-6.1.26.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jetty-util-6.1.26.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/objenesis-1.2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.t
asks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/java/32bit/jdk1.7.0_51/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.1.1.jar -ea:org.apache.lucene... -ea:org.apache.solr... com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140317_230107_233.events @/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140317_230107_233.suites
   [junit4] ERROR: JVM J0 ended with an exception: Forked process returned with error code: 240. Very likely a JVM crash.  Process output piped in logs above.
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1458)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:133)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:945)
   [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:942)
   [junit4] 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   [junit4] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   [junit4] 	at java.lang.Thread.run(Thread.java:744)

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:447: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:45: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:37: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:189: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:490: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1275: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:907: At least one slave process threw an exception, first: Forked process returned with error code: 240. Very likely a JVM crash.  Process output piped in logs above.

Total time: 165 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_51 -server -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-fcs-b132) - Build # 9831 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9831/
Java: 64bit/jdk1.8.0-fcs-b132 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.testWithin {#4 seed=[25E057BEF603A59:3621DA68E80BD686]}

Error Message:
Shouldn't match I#0:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) Q:ShapePair(Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) , Rect(minX=38.0,maxX=47.0,minY=-67.0,maxY=-59.0))

Stack Trace:
java.lang.AssertionError: Shouldn't match I#0:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) Q:ShapePair(Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) , Rect(minX=38.0,maxX=47.0,minY=-67.0,maxY=-59.0))
	at __randomizedtesting.SeedInfo.seed([25E057BEF603A59:3621DA68E80BD686]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.fail(SpatialOpRecursivePrefixTreeTest.java:355)
	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.doTest(SpatialOpRecursivePrefixTreeTest.java:335)
	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.testWithin(SpatialOpRecursivePrefixTreeTest.java:119)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 9087 lines...]
   [junit4] Suite: org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:1,SPG:(QuadPrefixTree(maxLevels:5,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:2,SPG:(QuadPrefixTree(maxLevels:6,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(QuadPrefixTree(maxLevels:3,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:2,SPG:(QuadPrefixTree(maxLevels:6,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:3,SPG:(QuadPrefixTree(maxLevels:7,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(QuadPrefixTree(maxLevels:3,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   2> Ig:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) Qg:ShapePair(Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) , Rect(minX=32.0,maxX=64.0,minY=-96.0,maxY=-32.0))
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=SpatialOpRecursivePrefixTreeTest -Dtests.method=testWithin -Dtests.seed=25E057BEF603A59 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ru -Dtests.timezone=Africa/Brazzaville -Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.12s J0 | SpatialOpRecursivePrefixTreeTest.testWithin {#4 seed=[25E057BEF603A59:3621DA68E80BD686]} <<<
   [junit4]    > Throwable #1: java.lang.AssertionError: Shouldn't match I#0:Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) Q:ShapePair(Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0) , Rect(minX=38.0,maxX=47.0,minY=-67.0,maxY=-59.0))
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([25E057BEF603A59:3621DA68E80BD686]:0)
   [junit4]    > 	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.fail(SpatialOpRecursivePrefixTreeTest.java:355)
   [junit4]    > 	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.doTest(SpatialOpRecursivePrefixTreeTest.java:335)
   [junit4]    > 	at org.apache.lucene.spatial.prefix.SpatialOpRecursivePrefixTreeTest.testWithin(SpatialOpRecursivePrefixTreeTest.java:119)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:744)
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:2,SPG:(QuadPrefixTree(maxLevels:6,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:4,SPG:(QuadPrefixTree(maxLevels:8,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:2,SPG:(QuadPrefixTree(maxLevels:6,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(QuadPrefixTree(maxLevels:3,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:3,SPG:(QuadPrefixTree(maxLevels:7,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:4,SPG:(QuadPrefixTree(maxLevels:8,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:0,SPG:(QuadPrefixTree(maxLevels:4,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(GeohashPrefixTree(maxLevels:3,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(GeohashPrefixTree(maxLevels:1,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(QuadPrefixTree(maxLevels:2,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(GeohashPrefixTree(maxLevels:2,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:4,SPG:(QuadPrefixTree(maxLevels:8,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(GeohashPrefixTree(maxLevels:3,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(GeohashPrefixTree(maxLevels:3,ctx:SpatialContext.GEO)))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(QuadPrefixTree(maxLevels:2,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-3,SPG:(QuadPrefixTree(maxLevels:1,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:4,SPG:(QuadPrefixTree(maxLevels:8,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-2,SPG:(QuadPrefixTree(maxLevels:2,ctx:SpatialContext{geo=false, calculator=CartesianDistCalc, worldBounds=Rect(minX=0.0,maxX=256.0,minY=-128.0,maxY=128.0)})))
   [junit4]   1> Strategy: RecursivePrefixTreeStrategy(prefixGridScanLevel:-1,SPG:(GeohashPrefixTree(maxLevels:3,ctx:SpatialContext.GEO)))
   [junit4]   2> NOTE: test params are: codec=Lucene40, sim=DefaultSimilarity, locale=ru, timezone=Africa/Brazzaville
   [junit4]   2> NOTE: Linux 3.8.0-36-generic amd64/Oracle Corporation 1.8.0 (64-bit)/cpus=8,threads=1,free=107729760,total=131989504
   [junit4]   2> NOTE: All tests run in this JVM: [QueryEqualsHashCodeTest, SpatialOpRecursivePrefixTreeTest]
   [junit4] Completed on J0 in 4.89s, 43 tests, 1 failure <<< FAILURES!

[...truncated 38 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:447: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:45: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:37: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:539: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1996: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/module-build.xml:60: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1275: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:907: There were test failures: 14 suites, 79 tests, 1 failure, 2 ignored (2 assumptions)

Total time: 20 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-fcs-b132 -XX:-UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-fcs-b132) - Build # 9830 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9830/
Java: 32bit/jdk1.8.0-fcs-b132 -client -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testShutdown

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:42897 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:42897 within 45000 ms
	at __randomizedtesting.SeedInfo.seed([6FBCD7D7D4A8682E:8CCADE4233D2FD5C]:0)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:150)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:101)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:91)
	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:201)
	at org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:860)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:42897 within 45000 ms
	at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:223)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:142)
	... 45 more




Build Log:
[...truncated 11603 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.impl.CloudSolrServerTest
   [junit4]   2> 10831 T26 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (true)
   [junit4]   2> 10832 T26 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /_/
   [junit4]   2> 10834 T26 oasc.AbstractZkTestCase.<clinit> WARN TEST_HOME() does not exist - solrj test?
   [junit4]   2> 10836 T26 oas.SolrTestCaseJ4.setUp ###Starting testShutdown
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./solrtest-CloudSolrServerTest-1395118373088
   [junit4]   2> 10840 T26 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 10843 T27 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 10942 T26 oasc.ZkTestServer.run start zk server on port:42897
   [junit4]   2> 10984 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 67442 T30 oazsp.FileTxnLog.commit WARN fsync-ing the write ahead log in SyncThread:0 took 56409ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
   [junit4]   2> 67443 T26 oas.SolrTestCaseJ4.tearDown ###Ending testShutdown
   [junit4]   2> 67452 T28 oazs.NIOServerCnxn.doIO WARN caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x144d388f9260000, likely client has closed socket
   [junit4]   2> 	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
   [junit4]   2> 	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:744)
   [junit4]   2> 
   [junit4]   2> 67454 T26 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:42897 42897
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest -Dtests.method=testShutdown -Dtests.seed=6FBCD7D7D4A8682E -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=vi_VN -Dtests.timezone=Australia/LHI -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   56.7s J0 | CloudSolrServerTest.testShutdown <<<
   [junit4]    > Throwable #1: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:42897 within 45000 ms
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([6FBCD7D7D4A8682E:8CCADE4233D2FD5C]:0)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:150)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:101)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:91)
   [junit4]    > 	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
   [junit4]    > 	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
   [junit4]    > 	at org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
   [junit4]    > 	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:201)
   [junit4]    > 	at org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:744)
   [junit4]    > Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:42897 within 45000 ms
   [junit4]    > 	at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:223)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:142)
   [junit4]    > 	... 45 more
   [junit4]   2> 67584 T26 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./solrtest-CloudSolrServerTest-1395118429836
   [junit4]   2> 67585 T26 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 67586 T34 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 67686 T26 oasc.ZkTestServer.run start zk server on port:38601
   [junit4]   2> 67687 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 67785 T40 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@a086bb name:ZooKeeperConnection Watcher:127.0.0.1:38601 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 67785 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 67786 T26 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 67806 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 67808 T42 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@149d74c name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 67808 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 67811 T26 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 67816 T26 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 67819 T26 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 67824 T26 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 67828 T26 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 67834 T26 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 67840 T26 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 67840 T26 oascc.SolrZkClient.makePath makePath: /configs/conf1/schema.xml
   [junit4]   2> 67844 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml because it doesn't exist
   [junit4]   2> 67845 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/stopwords.txt because it doesn't exist
   [junit4]   2> 67845 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/protwords.txt because it doesn't exist
   [junit4]   2> 67845 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/currency.xml because it doesn't exist
   [junit4]   2> 67846 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/enumsConfig.xml because it doesn't exist
   [junit4]   2> 67846 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/open-exchange-rates.json because it doesn't exist
   [junit4]   2> 67846 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/mapping-ISOLatin1Accent.txt because it doesn't exist
   [junit4]   2> 67847 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/old_synonyms.txt because it doesn't exist
   [junit4]   2> 67847 T26 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/synonyms.txt because it doesn't exist
   [junit4]   2> 67851 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 67853 T44 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1916fe9 name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 67853 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 68077 T26 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 68305 T26 oejus.SslContextFactory.doStart Enabled Protocols [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
   [junit4]   2> 68326 T26 oejs.AbstractConnector.doStart Started SslSelectChannelConnector@127.0.0.1:41219
   [junit4]   2> 68368 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init()
   [junit4]   2> 68368 T26 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 68369 T26 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106
   [junit4]   2> 68369 T26 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/'
   [junit4]   2> 68387 T26 oasc.ConfigSolr.fromFile Loading container configuration from /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/solr.xml
   [junit4]   2> 68402 T26 oasc.CoreContainer.<init> New CoreContainer 5059049
   [junit4]   2> 68403 T26 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/]
   [junit4]   2> 68406 T26 oashc.HttpShardHandlerFactory.getParameter Setting socketTimeout to: 0
   [junit4]   2> 68406 T26 oashc.HttpShardHandlerFactory.getParameter Setting urlScheme to: null
   [junit4]   2> 68407 T26 oashc.HttpShardHandlerFactory.getParameter Setting connTimeout to: 0
   [junit4]   2> 68407 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxConnectionsPerHost to: 20
   [junit4]   2> 68407 T26 oashc.HttpShardHandlerFactory.getParameter Setting corePoolSize to: 0
   [junit4]   2> 68407 T26 oashc.HttpShardHandlerFactory.getParameter Setting maximumPoolSize to: 2147483647
   [junit4]   2> 68408 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxThreadIdleTime to: 5
   [junit4]   2> 68408 T26 oashc.HttpShardHandlerFactory.getParameter Setting sizeOfQueue to: -1
   [junit4]   2> 68408 T26 oashc.HttpShardHandlerFactory.getParameter Setting fairnessPolicy to: false
   [junit4]   2> 68411 T26 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 68411 T26 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 68411 T26 oasc.CoreContainer.load Host Name: 127.0.0.1
   [junit4]   2> 68412 T26 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:38601/solr
   [junit4]   2> 68421 T26 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 68422 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 68425 T56 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1d04085 name:ZooKeeperConnection Watcher:127.0.0.1:38601 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 68425 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 68431 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 68433 T58 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@9f6e9e name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 68433 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 68439 T26 oascc.SolrZkClient.makePath makePath: /overseer/queue
   [junit4]   2> 68445 T26 oascc.SolrZkClient.makePath makePath: /overseer/collection-queue-work
   [junit4]   2> 68450 T26 oascc.SolrZkClient.makePath makePath: /overseer/collection-map-running
   [junit4]   2> 68454 T26 oascc.SolrZkClient.makePath makePath: /overseer/collection-map-completed
   [junit4]   2> 68458 T26 oascc.SolrZkClient.makePath makePath: /overseer/collection-map-failure
   [junit4]   2> 68467 T26 oascc.SolrZkClient.makePath makePath: /live_nodes
   [junit4]   2> 68470 T26 oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:41219__
   [junit4]   2> 68472 T26 oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:41219__
   [junit4]   2> 68476 T26 oascc.SolrZkClient.makePath makePath: /overseer_elect
   [junit4]   2> 68480 T26 oascc.SolrZkClient.makePath makePath: /overseer_elect/election
   [junit4]   2> 68490 T26 oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader 127.0.0.1:41219__
   [junit4]   2> 68490 T26 oascc.SolrZkClient.makePath makePath: /overseer_elect/leader
   [junit4]   2> 68494 T26 oasc.Overseer.start Overseer (id=91430481417863172-127.0.0.1:41219__-n_0000000000) starting
   [junit4]   2> 68503 T26 oascc.SolrZkClient.makePath makePath: /overseer/queue-work
   [junit4]   2> 68530 T60 oasc.OverseerCollectionProcessor.run Process current queue of collection creations
   [junit4]   2> 68531 T26 oascc.SolrZkClient.makePath makePath: /clusterstate.json
   [junit4]   2> 68531 T60 oasc.OverseerCollectionProcessor.prioritizeOverseerNodes prioritizing overseer nodes
   [junit4]   2> 68534 T26 oascc.SolrZkClient.makePath makePath: /aliases.json
   [junit4]   2> 68536 T26 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 68543 T59 oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
   [junit4]   2> 68551 T61 oasc.ZkController.publish publishing core=collection1 state=down collection=control_collection
   [junit4]   2> 68551 T61 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 68554 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 68554 T61 oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 68558 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 68560 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=1 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:41219/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:41219__",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "numShards":"1",
   [junit4]   2> 	  "core_node_name":null}
   [junit4]   2> 68561 T59 oasc.Overseer$ClusterStateUpdater.createCollection Create collection control_collection with shards [shard1]
   [junit4]   2> 68574 T59 oasc.Overseer$ClusterStateUpdater.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 68578 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 68582 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 69555 T61 oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 69555 T61 oasc.CoreContainer.create Creating SolrCore 'collection1' using instanceDir: ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/collection1
   [junit4]   2> 69555 T61 oasc.ZkController.createCollectionZkNode Check for collection zkNode:control_collection
   [junit4]   2> 69556 T61 oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 69556 T61 oascc.ZkStateReader.readConfigName Load collection config from:/collections/control_collection
   [junit4]   2> 69558 T61 oascc.ZkStateReader.readConfigName path=/collections/control_collection configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 69558 T61 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/collection1/'
   [junit4]   2> 69591 T61 oasc.SolrConfig.<init> Using Lucene MatchVersion: LUCENE_50
   [junit4]   2> 69605 T61 oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 69607 T61 oass.IndexSchema.readSchema Reading Solr Schema from schema.xml
   [junit4]   2> 69616 T61 oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 69637 T61 oasc.SolrResourceLoader.findClass WARN Solr loaded a deprecated plugin/analysis class [solr.SortableIntField]. Please consult documentation how to replace it accordingly.
   [junit4]   2> 69641 T61 oasc.SolrResourceLoader.findClass WARN Solr loaded a deprecated plugin/analysis class [solr.SortableLongField]. Please consult documentation how to replace it accordingly.
   [junit4]   2> 69646 T61 oasc.SolrResourceLoader.findClass WARN Solr loaded a deprecated plugin/analysis class [solr.SortableFloatField]. Please consult documentation how to replace it accordingly.
   [junit4]   2> 69652 T61 oasc.SolrResourceLoader.findClass WARN Solr loaded a deprecated plugin/analysis class [solr.SortableDoubleField]. Please consult documentation how to replace it accordingly.
   [junit4]   2> 69920 T61 oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 69922 T61 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 69924 T61 oass.IndexSchema.readSchema WARN Field lowerfilt1and2 is not multivalued and destination for multiple copyFields (2)
   [junit4]   2> 69925 T61 oass.IndexSchema.readSchema WARN Field text is not multivalued and destination for multiple copyFields (3)
   [junit4]   2> 69928 T61 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 69928 T61 oasc.SolrCore.<init> [collection1] Opening new SolrCore at ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-controljetty-1395118430106/collection1/, dataDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/
   [junit4]   2> 69928 T61 oasc.SolrCore.<init> JMX monitoring not detected for core: collection1
   [junit4]   2> 69929 T61 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data
   [junit4]   2> 69930 T61 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/index/
   [junit4]   2> 69930 T61 oasc.SolrCore.initIndex WARN [collection1] Solr index directory './org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/index' doesn't exist. Creating new index...
   [junit4]   2> 69930 T61 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/index
   [junit4]   2> 69932 T61 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@1858f8a lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/index),segFN=segments_1,generation=1}
   [junit4]   2> 69933 T61 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 69934 T61 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 69935 T61 oasc.RequestHandlers.initHandlersFromConfig created /get: solr.RealTimeGetHandler
   [junit4]   2> 69935 T61 oasc.RequestHandlers.initHandlersFromConfig adding lazy requestHandler: solr.ReplicationHandler
   [junit4]   2> 69935 T61 oasc.RequestHandlers.initHandlersFromConfig created /replication: solr.ReplicationHandler
   [junit4]   2> 69942 T61 oasc.RequestHandlers.initHandlersFromConfig created standard: solr.StandardRequestHandler
   [junit4]   2> 69942 T61 oasc.RequestHandlers.initHandlersFromConfig created /update: solr.UpdateRequestHandler
   [junit4]   2> 69942 T61 oasc.RequestHandlers.initHandlersFromConfig created /admin/: org.apache.solr.handler.admin.AdminHandlers
   [junit4]   2> 69943 T61 oasc.RequestHandlers.initHandlersFromConfig created /admin/ping: solr.PingRequestHandler
   [junit4]   2> 69944 T61 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 69946 T61 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 69946 T61 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 69947 T61 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@1858f8a lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/control/data/index),segFN=segments_1,generation=1}
   [junit4]   2> 69947 T61 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 69947 T61 oass.SolrIndexSearcher.<init> Opening Searcher@130d5e9[collection1] main
   [junit4]   2> 69947 T61 oascc.ZkStateReader.readConfigName Load collection config from:/collections/control_collection
   [junit4]   2> 69950 T61 oascc.ZkStateReader.readConfigName path=/collections/control_collection configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 69950 T61 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 69953 T61 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 69953 T61 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 69954 T61 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 69955 T61 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 69955 T61 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 69955 T61 oasr.ManagedResource.notifyObserversDuringInit WARN No registered observers for /rest/managed
   [junit4]   2> 69955 T61 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 69956 T62 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@130d5e9[collection1] main{StandardDirectoryReader(segments_1:1:nrt)}
   [junit4]   2> 69956 T61 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 69958 T26 oass.SolrDispatchFilter.init user.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0
   [junit4]   2> 69958 T65 oasc.ZkController.register Register replica - core:collection1 address:https://127.0.0.1:41219/_ collection:control_collection shard:shard1
   [junit4]   2> 69958 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 69966 T65 oascc.SolrZkClient.makePath makePath: /collections/control_collection/leader_elect/shard1/election
   [junit4]   2> 69976 T65 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard1
   [junit4]   2> 69978 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 69979 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 69980 T67 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@40b6a6 name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 69980 T65 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 69980 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 69980 T65 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> 69984 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 69985 T26 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> ASYNC  NEW_CORE C52 name=collection1 org.apache.solr.core.SolrCore@486454 url=https://127.0.0.1:41219/_/collection1 node=127.0.0.1:41219__ C52_STATE=coll:control_collection core:collection1 props:{state=down, base_url=https://127.0.0.1:41219/_, core=collection1, node_name=127.0.0.1:41219__}
   [junit4]   2> 69985 T65 C52 P41219 oasc.SyncStrategy.sync Sync replicas to https://127.0.0.1:41219/_/collection1/
   [junit4]   2> 69986 T65 C52 P41219 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 69986 T65 C52 P41219 oasc.SyncStrategy.syncToMe https://127.0.0.1:41219/_/collection1/ has no replicas
   [junit4]   2> 69987 T65 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: https://127.0.0.1:41219/_/collection1/ shard1
   [junit4]   2> 69990 T65 oascc.SolrZkClient.makePath makePath: /collections/control_collection/leaders/shard1
   [junit4]   2> 69990 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70002 T26 oasc.ChaosMonkey.monkeyLog monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 70017 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70023 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70033 T26 oasc.AbstractFullDistribZkTestBase.createJettys create jetty 1
   [junit4]   2> 70034 T26 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 70050 T26 oejus.SslContextFactory.doStart Enabled Protocols [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
   [junit4]   2> 70061 T26 oejs.AbstractConnector.doStart Started SslSelectChannelConnector@127.0.0.1:39638
   [junit4]   2> 70064 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init()
   [junit4]   2> 70064 T26 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 70065 T26 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253
   [junit4]   2> 70065 T26 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: './org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/'
   [junit4]   2> 70084 T26 oasc.ConfigSolr.fromFile Loading container configuration from /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/solr.xml
   [junit4]   2> 70102 T26 oasc.CoreContainer.<init> New CoreContainer 27977600
   [junit4]   2> 70102 T26 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/]
   [junit4]   2> 70103 T26 oashc.HttpShardHandlerFactory.getParameter Setting socketTimeout to: 0
   [junit4]   2> 70103 T26 oashc.HttpShardHandlerFactory.getParameter Setting urlScheme to: null
   [junit4]   2> 70103 T26 oashc.HttpShardHandlerFactory.getParameter Setting connTimeout to: 0
   [junit4]   2> 70104 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxConnectionsPerHost to: 20
   [junit4]   2> 70104 T26 oashc.HttpShardHandlerFactory.getParameter Setting corePoolSize to: 0
   [junit4]   2> 70104 T26 oashc.HttpShardHandlerFactory.getParameter Setting maximumPoolSize to: 2147483647
   [junit4]   2> 70104 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxThreadIdleTime to: 5
   [junit4]   2> 70105 T26 oashc.HttpShardHandlerFactory.getParameter Setting sizeOfQueue to: -1
   [junit4]   2> 70105 T26 oashc.HttpShardHandlerFactory.getParameter Setting fairnessPolicy to: false
   [junit4]   2> 70107 T26 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 70108 T26 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 70108 T26 oasc.CoreContainer.load Host Name: 127.0.0.1
   [junit4]   2> 70108 T26 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:38601/solr
   [junit4]   2> 70109 T26 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 70109 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 70111 T78 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1cfb0c3 name:ZooKeeperConnection Watcher:127.0.0.1:38601 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 70112 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 70115 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 70117 T80 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@4502a7 name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 70117 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 70125 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 70125 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 70130 T26 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 70169 T65 oasc.ZkController.register We are https://127.0.0.1:41219/_/collection1/ and leader is https://127.0.0.1:41219/_/collection1/
   [junit4]   2> 70169 T65 oasc.ZkController.register No LogReplay needed for core=collection1 baseURL=https://127.0.0.1:41219/_
   [junit4]   2> 70169 T65 oasc.ZkController.checkRecovery I am the leader, no recovery necessary
   [junit4]   2> 70169 T65 oasc.ZkController.publish publishing core=collection1 state=active collection=control_collection
   [junit4]   2> 70170 T65 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 70171 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70171 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70171 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70172 T65 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 70173 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 70174 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:41219/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:41219__",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"control_collection",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":"core_node1"}
   [junit4]   2> 70178 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 70280 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 70280 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 70281 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
   [junit4]   2> 71135 T26 oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:39638__
   [junit4]   2> 71138 T26 oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:39638__
   [junit4]   2> 71143 T67 oascc.ZkStateReader$3.process Updating live nodes... (2)
   [junit4]   2> 71143 T58 oascc.ZkStateReader$3.process Updating live nodes... (2)
   [junit4]   2> 71144 T80 oascc.ZkStateReader$3.process Updating live nodes... (2)
   [junit4]   2> 71163 T81 oasc.ZkController.publish publishing core=collection1 state=down collection=collection1
   [junit4]   2> 71163 T81 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 71168 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 71168 T81 oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 71168 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 71169 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 71171 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 71173 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:39638/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:39638__",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":null}
   [junit4]   2> 71173 T59 oasc.Overseer$ClusterStateUpdater.createCollection Create collection collection1 with shards [shard1, shard2]
   [junit4]   2> 71174 T59 oasc.Overseer$ClusterStateUpdater.updateState Assigning new node to shard shard=shard2
   [junit4]   2> 71179 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 71284 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 71284 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 71284 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72169 T81 oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 72169 T81 oasc.CoreContainer.create Creating SolrCore 'collection1' using instanceDir: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/collection1
   [junit4]   2> 72170 T81 oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 72171 T81 oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 72172 T81 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 72173 T81 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 72174 T81 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: './org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/collection1/'
   [junit4]   2> 72200 T81 oasc.SolrConfig.<init> Using Lucene MatchVersion: LUCENE_50
   [junit4]   2> 72215 T81 oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 72217 T81 oass.IndexSchema.readSchema Reading Solr Schema from schema.xml
   [junit4]   2> 72224 T81 oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 72422 T81 oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 72423 T81 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 72424 T81 oass.IndexSchema.readSchema WARN Field lowerfilt1and2 is not multivalued and destination for multiple copyFields (2)
   [junit4]   2> 72425 T81 oass.IndexSchema.readSchema WARN Field text is not multivalued and destination for multiple copyFields (3)
   [junit4]   2> 72425 T81 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 72425 T81 oasc.SolrCore.<init> [collection1] Opening new SolrCore at ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty1-1395118432253/collection1/, dataDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/
   [junit4]   2> 72426 T81 oasc.SolrCore.<init> JMX monitoring not detected for core: collection1
   [junit4]   2> 72426 T81 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1
   [junit4]   2> 72426 T81 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/index/
   [junit4]   2> 72427 T81 oasc.SolrCore.initIndex WARN [collection1] Solr index directory './org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/index' doesn't exist. Creating new index...
   [junit4]   2> 72427 T81 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/index
   [junit4]   2> 72428 T81 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@710be7 lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/index),segFN=segments_1,generation=1}
   [junit4]   2> 72428 T81 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 72429 T81 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 72430 T81 oasc.RequestHandlers.initHandlersFromConfig created /get: solr.RealTimeGetHandler
   [junit4]   2> 72430 T81 oasc.RequestHandlers.initHandlersFromConfig adding lazy requestHandler: solr.ReplicationHandler
   [junit4]   2> 72430 T81 oasc.RequestHandlers.initHandlersFromConfig created /replication: solr.ReplicationHandler
   [junit4]   2> 72430 T81 oasc.RequestHandlers.initHandlersFromConfig created standard: solr.StandardRequestHandler
   [junit4]   2> 72430 T81 oasc.RequestHandlers.initHandlersFromConfig created /update: solr.UpdateRequestHandler
   [junit4]   2> 72431 T81 oasc.RequestHandlers.initHandlersFromConfig created /admin/: org.apache.solr.handler.admin.AdminHandlers
   [junit4]   2> 72431 T81 oasc.RequestHandlers.initHandlersFromConfig created /admin/ping: solr.PingRequestHandler
   [junit4]   2> 72432 T81 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 72433 T81 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 72434 T81 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 72434 T81 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@710be7 lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty1/index),segFN=segments_1,generation=1}
   [junit4]   2> 72440 T81 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 72440 T81 oass.SolrIndexSearcher.<init> Opening Searcher@1288185[collection1] main
   [junit4]   2> 72440 T81 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 72442 T81 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 72442 T81 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 72443 T81 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 72443 T81 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 72443 T81 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 72444 T81 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 72445 T81 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 72445 T81 oasr.ManagedResource.notifyObserversDuringInit WARN No registered observers for /rest/managed
   [junit4]   2> 72445 T81 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 72446 T82 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@1288185[collection1] main{StandardDirectoryReader(segments_1:1:nrt)}
   [junit4]   2> 72446 T81 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 72447 T26 oass.SolrDispatchFilter.init user.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0
   [junit4]   2> 72447 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 72447 T85 oasc.ZkController.register Register replica - core:collection1 address:https://127.0.0.1:39638/_ collection:collection1 shard:shard2
   [junit4]   2> 72450 T85 oascc.SolrZkClient.makePath makePath: /collections/collection1/leader_elect/shard2/election
   [junit4]   2> 72461 T85 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard2
   [junit4]   2> 72465 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72465 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72465 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72466 T85 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 72466 T85 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> ASYNC  NEW_CORE C53 name=collection1 org.apache.solr.core.SolrCore@14cd4fc url=https://127.0.0.1:39638/_/collection1 node=127.0.0.1:39638__ C53_STATE=coll:collection1 core:collection1 props:{state=down, base_url=https://127.0.0.1:39638/_, core=collection1, node_name=127.0.0.1:39638__}
   [junit4]   2> 72467 T85 C53 P39638 oasc.SyncStrategy.sync Sync replicas to https://127.0.0.1:39638/_/collection1/
   [junit4]   2> 72468 T85 C53 P39638 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 72468 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 72468 T85 C53 P39638 oasc.SyncStrategy.syncToMe https://127.0.0.1:39638/_/collection1/ has no replicas
   [junit4]   2> 72469 T85 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: https://127.0.0.1:39638/_/collection1/ shard2
   [junit4]   2> 72469 T85 oascc.SolrZkClient.makePath makePath: /collections/collection1/leaders/shard2
   [junit4]   2> 72477 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72490 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72494 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72496 T26 oasc.AbstractFullDistribZkTestBase.createJettys create jetty 2
   [junit4]   2> 72496 T26 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 72501 T26 oejus.SslContextFactory.doStart Enabled Protocols [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
   [junit4]   2> 72503 T26 oejs.AbstractConnector.doStart Started SslSelectChannelConnector@127.0.0.1:53188
   [junit4]   2> 72506 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init()
   [junit4]   2> 72506 T26 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 72507 T26 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701
   [junit4]   2> 72507 T26 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/'
   [junit4]   2> 72537 T26 oasc.ConfigSolr.fromFile Loading container configuration from /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/solr.xml
   [junit4]   2> 72560 T26 oasc.CoreContainer.<init> New CoreContainer 16459242
   [junit4]   2> 72560 T26 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/]
   [junit4]   2> 72561 T26 oashc.HttpShardHandlerFactory.getParameter Setting socketTimeout to: 0
   [junit4]   2> 72561 T26 oashc.HttpShardHandlerFactory.getParameter Setting urlScheme to: null
   [junit4]   2> 72561 T26 oashc.HttpShardHandlerFactory.getParameter Setting connTimeout to: 0
   [junit4]   2> 72562 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxConnectionsPerHost to: 20
   [junit4]   2> 72562 T26 oashc.HttpShardHandlerFactory.getParameter Setting corePoolSize to: 0
   [junit4]   2> 72562 T26 oashc.HttpShardHandlerFactory.getParameter Setting maximumPoolSize to: 2147483647
   [junit4]   2> 72563 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxThreadIdleTime to: 5
   [junit4]   2> 72563 T26 oashc.HttpShardHandlerFactory.getParameter Setting sizeOfQueue to: -1
   [junit4]   2> 72563 T26 oashc.HttpShardHandlerFactory.getParameter Setting fairnessPolicy to: false
   [junit4]   2> 72566 T26 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 72567 T26 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 72567 T26 oasc.CoreContainer.load Host Name: 127.0.0.1
   [junit4]   2> 72567 T26 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:38601/solr
   [junit4]   2> 72568 T26 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 72569 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 72571 T96 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1ef0293 name:ZooKeeperConnection Watcher:127.0.0.1:38601 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 72572 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 72578 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 72581 T98 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@5f0e10 name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 72581 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 72594 T26 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 72597 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72598 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72598 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72598 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72641 T85 oasc.ZkController.register We are https://127.0.0.1:39638/_/collection1/ and leader is https://127.0.0.1:39638/_/collection1/
   [junit4]   2> 72641 T85 oasc.ZkController.register No LogReplay needed for core=collection1 baseURL=https://127.0.0.1:39638/_
   [junit4]   2> 72641 T85 oasc.ZkController.checkRecovery I am the leader, no recovery necessary
   [junit4]   2> 72642 T85 oasc.ZkController.publish publishing core=collection1 state=active collection=collection1
   [junit4]   2> 72642 T85 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 72652 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72653 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72653 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72654 T85 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 72657 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 72659 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:39638/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:39638__",
   [junit4]   2> 	  "shard":"shard2",
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":"core_node1"}
   [junit4]   2> 72665 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 72770 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72770 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72770 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 72770 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
   [junit4]   2> 73601 T26 oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:53188__
   [junit4]   2> 73604 T26 oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:53188__
   [junit4]   2> 73609 T67 oascc.ZkStateReader$3.process Updating live nodes... (3)
   [junit4]   2> 73610 T58 oascc.ZkStateReader$3.process Updating live nodes... (3)
   [junit4]   2> 73610 T80 oascc.ZkStateReader$3.process Updating live nodes... (3)
   [junit4]   2> 73610 T98 oascc.ZkStateReader$3.process Updating live nodes... (3)
   [junit4]   2> 73625 T99 oasc.ZkController.publish publishing core=collection1 state=down collection=collection1
   [junit4]   2> 73625 T99 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 73628 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 73628 T99 oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 73628 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 73629 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 73632 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 73634 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:53188/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:53188__",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":null}
   [junit4]   2> 73635 T59 oasc.Overseer$ClusterStateUpdater.updateState Collection already exists with numShards=2
   [junit4]   2> 73635 T59 oasc.Overseer$ClusterStateUpdater.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 73640 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 73744 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 73744 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 73744 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 73744 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 74629 T99 oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 74630 T99 oasc.CoreContainer.create Creating SolrCore 'collection1' using instanceDir: ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/collection1
   [junit4]   2> 74630 T99 oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 74631 T99 oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 74631 T99 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 74633 T99 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 74633 T99 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/collection1/'
   [junit4]   2> 74666 T99 oasc.SolrConfig.<init> Using Lucene MatchVersion: LUCENE_50
   [junit4]   2> 74682 T99 oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 74683 T99 oass.IndexSchema.readSchema Reading Solr Schema from schema.xml
   [junit4]   2> 74691 T99 oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 74838 T99 oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 74839 T99 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 74844 T99 oass.IndexSchema.readSchema WARN Field lowerfilt1and2 is not multivalued and destination for multiple copyFields (2)
   [junit4]   2> 74845 T99 oass.IndexSchema.readSchema WARN Field text is not multivalued and destination for multiple copyFields (3)
   [junit4]   2> 74846 T99 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 74846 T99 oasc.SolrCore.<init> [collection1] Opening new SolrCore at ../../../../../../../../../../mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty2-1395118434701/collection1/, dataDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/
   [junit4]   2> 74846 T99 oasc.SolrCore.<init> JMX monitoring not detected for core: collection1
   [junit4]   2> 74847 T99 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2
   [junit4]   2> 74848 T99 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/index/
   [junit4]   2> 74848 T99 oasc.SolrCore.initIndex WARN [collection1] Solr index directory './org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/index' doesn't exist. Creating new index...
   [junit4]   2> 74848 T99 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/index
   [junit4]   2> 74850 T99 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@df5672 lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/index),segFN=segments_1,generation=1}
   [junit4]   2> 74850 T99 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 74852 T99 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 74853 T99 oasc.RequestHandlers.initHandlersFromConfig created /get: solr.RealTimeGetHandler
   [junit4]   2> 74853 T99 oasc.RequestHandlers.initHandlersFromConfig adding lazy requestHandler: solr.ReplicationHandler
   [junit4]   2> 74853 T99 oasc.RequestHandlers.initHandlersFromConfig created /replication: solr.ReplicationHandler
   [junit4]   2> 74853 T99 oasc.RequestHandlers.initHandlersFromConfig created standard: solr.StandardRequestHandler
   [junit4]   2> 74854 T99 oasc.RequestHandlers.initHandlersFromConfig created /update: solr.UpdateRequestHandler
   [junit4]   2> 74854 T99 oasc.RequestHandlers.initHandlersFromConfig created /admin/: org.apache.solr.handler.admin.AdminHandlers
   [junit4]   2> 74854 T99 oasc.RequestHandlers.initHandlersFromConfig created /admin/ping: solr.PingRequestHandler
   [junit4]   2> 74857 T99 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 74858 T99 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 74858 T99 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 74859 T99 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@df5672 lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty2/index),segFN=segments_1,generation=1}
   [junit4]   2> 74859 T99 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 74859 T99 oass.SolrIndexSearcher.<init> Opening Searcher@4e9648[collection1] main
   [junit4]   2> 74860 T99 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 74861 T99 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 74861 T99 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 74862 T99 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 74862 T99 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 74862 T99 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 74863 T99 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 74863 T99 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 74863 T99 oasr.ManagedResource.notifyObserversDuringInit WARN No registered observers for /rest/managed
   [junit4]   2> 74863 T99 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 74864 T99 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 74864 T100 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@4e9648[collection1] main{StandardDirectoryReader(segments_1:1:nrt)}
   [junit4]   2> 74865 T26 oass.SolrDispatchFilter.init user.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0
   [junit4]   2> 74865 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 74865 T103 oasc.ZkController.register Register replica - core:collection1 address:https://127.0.0.1:53188/_ collection:collection1 shard:shard1
   [junit4]   2> 74867 T103 oascc.SolrZkClient.makePath makePath: /collections/collection1/leader_elect/shard1/election
   [junit4]   2> 74875 T103 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard1
   [junit4]   2> 74878 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74878 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74878 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74879 T103 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 74879 T103 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> ASYNC  NEW_CORE C54 name=collection1 org.apache.solr.core.SolrCore@49fe01 url=https://127.0.0.1:53188/_/collection1 node=127.0.0.1:53188__ C54_STATE=coll:collection1 core:collection1 props:{state=down, base_url=https://127.0.0.1:53188/_, core=collection1, node_name=127.0.0.1:53188__}
   [junit4]   2> 74879 T103 C54 P53188 oasc.SyncStrategy.sync Sync replicas to https://127.0.0.1:53188/_/collection1/
   [junit4]   2> 74879 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 74880 T103 C54 P53188 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 74880 T103 C54 P53188 oasc.SyncStrategy.syncToMe https://127.0.0.1:53188/_/collection1/ has no replicas
   [junit4]   2> 74880 T103 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: https://127.0.0.1:53188/_/collection1/ shard1
   [junit4]   2> 74880 T103 oascc.SolrZkClient.makePath makePath: /collections/collection1/leaders/shard1
   [junit4]   2> 74885 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74886 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74897 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 74913 T26 oasc.AbstractFullDistribZkTestBase.createJettys create jetty 3
   [junit4]   2> 74913 T26 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 74919 T26 oejus.SslContextFactory.doStart Enabled Protocols [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
   [junit4]   2> 74924 T26 oejs.AbstractConnector.doStart Started SslSelectChannelConnector@127.0.0.1:49268
   [junit4]   2> 74926 T26 oass.SolrDispatchFilter.init SolrDispatchFilter.init()
   [junit4]   2> 74927 T26 oasc.SolrResourceLoader.locateSolrHome JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 74927 T26 oasc.SolrResourceLoader.locateSolrHome using system property solr.solr.home: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120
   [junit4]   2> 74927 T26 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: './org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/'
   [junit4]   2> 74951 T26 oasc.ConfigSolr.fromFile Loading container configuration from /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/solr.xml
   [junit4]   2> 74971 T26 oasc.CoreContainer.<init> New CoreContainer 31364558
   [junit4]   2> 74972 T26 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/]
   [junit4]   2> 74977 T26 oashc.HttpShardHandlerFactory.getParameter Setting socketTimeout to: 0
   [junit4]   2> 74978 T26 oashc.HttpShardHandlerFactory.getParameter Setting urlScheme to: null
   [junit4]   2> 74978 T26 oashc.HttpShardHandlerFactory.getParameter Setting connTimeout to: 0
   [junit4]   2> 74978 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxConnectionsPerHost to: 20
   [junit4]   2> 74979 T26 oashc.HttpShardHandlerFactory.getParameter Setting corePoolSize to: 0
   [junit4]   2> 74979 T26 oashc.HttpShardHandlerFactory.getParameter Setting maximumPoolSize to: 2147483647
   [junit4]   2> 74980 T26 oashc.HttpShardHandlerFactory.getParameter Setting maxThreadIdleTime to: 5
   [junit4]   2> 74980 T26 oashc.HttpShardHandlerFactory.getParameter Setting sizeOfQueue to: -1
   [junit4]   2> 74980 T26 oashc.HttpShardHandlerFactory.getParameter Setting fairnessPolicy to: false
   [junit4]   2> 74984 T26 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 74985 T26 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 74985 T26 oasc.CoreContainer.load Host Name: 127.0.0.1
   [junit4]   2> 74986 T26 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:38601/solr
   [junit4]   2> 74986 T26 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 74987 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 74989 T114 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@132d438 name:ZooKeeperConnection Watcher:127.0.0.1:38601 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 74989 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 74994 T26 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 74995 T116 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1998b9a name:ZooKeeperConnection Watcher:127.0.0.1:38601/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 74996 T26 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 74999 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 74999 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75000 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 74999 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75010 T26 oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster state from ZooKeeper... 
   [junit4]   2> 75037 T103 oasc.ZkController.register We are https://127.0.0.1:53188/_/collection1/ and leader is https://127.0.0.1:53188/_/collection1/
   [junit4]   2> 75038 T103 oasc.ZkController.register No LogReplay needed for core=collection1 baseURL=https://127.0.0.1:53188/_
   [junit4]   2> 75038 T103 oasc.ZkController.checkRecovery I am the leader, no recovery necessary
   [junit4]   2> 75038 T103 oasc.ZkController.publish publishing core=collection1 state=active collection=collection1
   [junit4]   2> 75038 T103 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 75040 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 75040 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 75040 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 75040 T103 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 75042 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 75043 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"active",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:53188/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:53188__",
   [junit4]   2> 	  "shard":"shard1",
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":"core_node2"}
   [junit4]   2> 75045 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 75149 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75149 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75150 T116 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75149 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 75149 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
   [junit4]   2> 76014 T26 oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:49268__
   [junit4]   2> 76016 T26 oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:49268__
   [junit4]   2> 76020 T67 oascc.ZkStateReader$3.process Updating live nodes... (4)
   [junit4]   2> 76020 T98 oascc.ZkStateReader$3.process Updating live nodes... (4)
   [junit4]   2> 76020 T116 oascc.ZkStateReader$3.process Updating live nodes... (4)
   [junit4]   2> 76020 T80 oascc.ZkStateReader$3.process Updating live nodes... (4)
   [junit4]   2> 76021 T58 oascc.ZkStateReader$3.process Updating live nodes... (4)
   [junit4]   2> 76028 T117 oasc.ZkController.publish publishing core=collection1 state=down collection=collection1
   [junit4]   2> 76029 T117 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 76030 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 76030 T117 oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 76031 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 76031 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 76033 T59 oascc.ZkStateReader.updateClusterState Updating cloud state from ZooKeeper... 
   [junit4]   2> 76034 T59 oasc.Overseer$ClusterStateUpdater.updateState Update state numShards=2 message={
   [junit4]   2> 	  "operation":"state",
   [junit4]   2> 	  "state":"down",
   [junit4]   2> 	  "base_url":"https://127.0.0.1:49268/_",
   [junit4]   2> 	  "core":"collection1",
   [junit4]   2> 	  "roles":null,
   [junit4]   2> 	  "node_name":"127.0.0.1:49268__",
   [junit4]   2> 	  "shard":null,
   [junit4]   2> 	  "collection":"collection1",
   [junit4]   2> 	  "numShards":"2",
   [junit4]   2> 	  "core_node_name":null}
   [junit4]   2> 76034 T59 oasc.Overseer$ClusterStateUpdater.updateState Collection already exists with numShards=2
   [junit4]   2> 76034 T59 oasc.Overseer$ClusterStateUpdater.updateState Assigning new node to shard shard=shard2
   [junit4]   2> 76038 T58 oasc.DistributedQueue$LatchChildWatcher.process LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChanged
   [junit4]   2> 76142 T58 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 76142 T116 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 76142 T98 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 76142 T80 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 76142 T67 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 77031 T117 oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 77032 T117 oasc.CoreContainer.create Creating SolrCore 'collection1' using instanceDir: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/collection1
   [junit4]   2> 77032 T117 oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 77034 T117 oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 77034 T117 oascc.ZkStateReader.readConfigName Load collection config from:/collections/collection1
   [junit4]   2> 77037 T117 oascc.ZkStateReader.readConfigName path=/collections/collection1 configName=conf1 specified config exists in ZooKeeper
   [junit4]   2> 77037 T117 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: './org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/collection1/'
   [junit4]   2> 77081 T117 oasc.SolrConfig.<init> Using Lucene MatchVersion: LUCENE_50
   [junit4]   2> 77095 T117 oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 77096 T117 oass.IndexSchema.readSchema Reading Solr Schema from schema.xml
   [junit4]   2> 77107 T117 oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 77321 T117 oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 77323 T117 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 77324 T117 oass.IndexSchema.readSchema WARN Field lowerfilt1and2 is not multivalued and destination for multiple copyFields (2)
   [junit4]   2> 77325 T117 oass.IndexSchema.readSchema WARN Field text is not multivalued and destination for multiple copyFields (3)
   [junit4]   2> 77325 T117 oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
   [junit4]   2> 77325 T117 oasc.SolrCore.<init> [collection1] Opening new SolrCore at ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-jetty3-1395118437120/collection1/, dataDir=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/
   [junit4]   2> 77325 T117 oasc.SolrCore.<init> JMX monitoring not detected for core: collection1
   [junit4]   2> 77326 T117 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3
   [junit4]   2> 77326 T117 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index/
   [junit4]   2> 77326 T117 oasc.SolrCore.initIndex WARN [collection1] Solr index directory './org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index' doesn't exist. Creating new index...
   [junit4]   2> 77327 T117 oasc.CachingDirectoryFactory.get return new directory for ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index
   [junit4]   2> 77328 T117 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2> 		commit{dir=MockDirectoryWrapper(RAMDirectory@5da8af lockFactory=NativeFSLockFactory@./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index),segFN=segments_1,generation=1}
   [junit4]   2> 77328 T117 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 77330 T117 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 77330 T117 oasc.RequestHandlers.initHandlersFromConfig created /get: solr.RealTimeGetHandler
   [junit4]   2> 77330 T117 oasc.RequestHandlers.initHandlersFromConfig adding lazy requestHandler: solr.ReplicationHandler
   [junit4]   2> 77330 T117 oasc.RequestHandlers.initHandlersFromConfig created /replication: solr.ReplicationHandler
   [junit4]   2> 77330 T117 oasc.RequestHandlers.initHandlersFromConfig created standard: solr.StandardRequestHandler
   [junit4]   2> 77331 T117 oasc.RequestHandlers.initHandlersFromConfig created /update: solr.UpdateRequestHandler
   [junit4]   2> 77331 T117 oasc.RequestHandlers.initHandlersFromConfig created /admin/: org.apache.solr.handler.admin.AdminHandlers
   [junit4]   2> 77331 T117 oasc.RequestHandlers.initHandlersFromConfig created /admin/ping: solr.PingRequestHandler
   [junit4]   2> 77333 T117 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 77334 T117 oasu.CommitTracker.<init> 

[...truncated too long message...]

ps://127.0.0.1:49268/_/collection1 node=127.0.0.1:49268__ C121_STATE=coll:collection1 core:collection1 props:{state=active, base_url=https://127.0.0.1:49268/_, core=collection1, node_name=127.0.0.1:49268__}
   [junit4]   2> 91267 T116 C121 P49268 oasc.SyncStrategy.sync WARN Closed, skipping sync up.
   [junit4]   2> 91271 T116 oasc.ShardLeaderElectionContext.rejoinLeaderElection Not rejoining election because CoreContainer is shutdown
   [junit4]   2> 91272 T116 oasc.SolrCore.close [collection1]  CLOSING SolrCore org.apache.solr.core.SolrCore@722b00
   [junit4]   2> 91272 T116 oasu.DirectUpdateHandler2.close closing DirectUpdateHandler2{commits=6,autocommits=0,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=2,cumulative_deletesById=1,cumulative_deletesByQuery=2,cumulative_errors=0,transaction_logs_total_size=0,transaction_logs_total_number=5}
   [junit4]   2> 91272 T116 oasu.SolrCoreState.decrefSolrCoreState Closing SolrCoreState
   [junit4]   2> 91273 T116 oasu.DefaultSolrCoreState.closeIndexWriter SolrCoreState ref count has reached 0 - closing IndexWriter
   [junit4]   2> 91273 T116 oasu.DefaultSolrCoreState.closeIndexWriter closing IndexWriter with IndexWriterCloser
   [junit4]   2> 91274 T116 oasc.SolrCore.closeSearcher [collection1] Closing main searcher on request.
   [junit4]   2> 91274 T116 oasc.CachingDirectoryFactory.close Closing MockDirectoryFactory - 2 directories currently being tracked
   [junit4]   2> 91274 T116 oasc.CachingDirectoryFactory.closeCacheValue looking to close ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index [CachedDir<<refCount=0;path=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index;done=false>>]
   [junit4]   2> 91275 T116 oasc.CachingDirectoryFactory.close Closing directory: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3/index
   [junit4]   2> 91275 T116 oasc.CachingDirectoryFactory.closeCacheValue looking to close ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3 [CachedDir<<refCount=0;path=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3;done=false>>]
   [junit4]   2> 91275 T116 oasc.CachingDirectoryFactory.close Closing directory: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty3
   [junit4]   2> 91275 T116 oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr cannot talk to ZK
   [junit4]   2> 91275 T116 oascc.ZkStateReader$2.process A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 4)
   [junit4]   2> 91275 T116 oascc.ZkStateReader$2.process WARN ZooKeeper watch triggered, but Solr cannot talk to ZK
   [junit4]   2> 91276 T116 oasc.LeaderElector$1.process WARN  org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /overseer_elect/election
   [junit4]   2> 	at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
   [junit4]   2> 	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   [junit4]   2> 	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:259)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:256)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:92)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:55)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:137)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
   [junit4]   2> 
   [junit4]   2> ASYNC  NEW_CORE C122 name=collection1 org.apache.solr.core.SolrCore@1462bf0 url=https://127.0.0.1:42765/_/collection1 node=127.0.0.1:42765__ C122_STATE=coll:collection1 core:collection1 props:{state=active, base_url=https://127.0.0.1:42765/_, core=collection1, node_name=127.0.0.1:42765__}
   [junit4]   2> 91355 T135 C122 P42765 oasc.SyncStrategy.sync WARN Closed, skipping sync up.
   [junit4]   2> 91356 T135 oasc.ShardLeaderElectionContext.rejoinLeaderElection Not rejoining election because CoreContainer is shutdown
   [junit4]   2> 91356 T135 oasc.SolrCore.close [collection1]  CLOSING SolrCore org.apache.solr.core.SolrCore@1462bf0
   [junit4]   2> 91357 T135 oasu.DirectUpdateHandler2.close closing DirectUpdateHandler2{commits=6,autocommits=0,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=2,cumulative_deletesById=1,cumulative_deletesByQuery=2,cumulative_errors=0,transaction_logs_total_size=0,transaction_logs_total_number=5}
   [junit4]   2> 91357 T135 oasu.SolrCoreState.decrefSolrCoreState Closing SolrCoreState
   [junit4]   2> 91357 T135 oasu.DefaultSolrCoreState.closeIndexWriter SolrCoreState ref count has reached 0 - closing IndexWriter
   [junit4]   2> 91357 T135 oasu.DefaultSolrCoreState.closeIndexWriter closing IndexWriter with IndexWriterCloser
   [junit4]   2> 91358 T135 oasc.SolrCore.closeSearcher [collection1] Closing main searcher on request.
   [junit4]   2> 91359 T135 oasc.CachingDirectoryFactory.close Closing MockDirectoryFactory - 2 directories currently being tracked
   [junit4]   2> 91359 T135 oasc.CachingDirectoryFactory.closeCacheValue looking to close ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4/index [CachedDir<<refCount=0;path=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4/index;done=false>>]
   [junit4]   2> 91359 T135 oasc.CachingDirectoryFactory.close Closing directory: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4/index
   [junit4]   2> 91359 T135 oasc.CachingDirectoryFactory.closeCacheValue looking to close ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4 [CachedDir<<refCount=0;path=./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4;done=false>>]
   [junit4]   2> 91359 T135 oasc.CachingDirectoryFactory.close Closing directory: ./org.apache.solr.client.solrj.impl.CloudSolrServerTest-1395118429836/jetty4
   [junit4]   2> 91360 T135 oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr cannot talk to ZK
   [junit4]   2> 91360 T135 oasc.LeaderElector$1.process WARN  org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /overseer_elect/election
   [junit4]   2> 	at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
   [junit4]   2> 	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   [junit4]   2> 	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:259)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:256)
   [junit4]   2> 	at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
   [junit4]   2> 	at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:256)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:92)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:55)
   [junit4]   2> 	at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:137)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
   [junit4]   2> 
   [junit4]   2> NOTE: test params are: codec=Lucene40, sim=DefaultSimilarity, locale=vi_VN, timezone=Australia/LHI
   [junit4]   2> NOTE: Linux 3.8.0-36-generic i386/Oracle Corporation 1.8.0 (32-bit)/cpus=8,threads=1,free=10712032,total=44961792
   [junit4]   2> NOTE: All tests run in this JVM: [SolrParamTest, ClientUtilsTest, SolrExampleEmbeddedTest, TestUpdateRequestCodec, TestJavaBinCodec, CloudSolrServerTest]
   [junit4] Completed on J0 in 81.41s, 2 tests, 1 error <<< FAILURES!

[...truncated 94 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:447: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:45: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:37: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:202: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:490: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1275: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:907: There were test failures: 49 suites, 283 tests, 1 error

Total time: 48 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-fcs-b132 -client -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_60-ea-b07) - Build # 9829 - Still Failing!

Posted by Policeman Jenkins Server <je...@thetaphi.de>.
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9829/
Java: 32bit/jdk1.7.0_60-ea-b07 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:43702 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:43702 within 45000 ms
	at __randomizedtesting.SeedInfo.seed([7A8E06EF54218FD:864E6E76821D78C1]:0)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:150)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:101)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:91)
	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:201)
	at org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:860)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:43702 within 45000 ms
	at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:223)
	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:142)
	... 45 more




Build Log:
[...truncated 11568 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.impl.CloudSolrServerTest
   [junit4]   2> 34613 T160 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (false) and clientAuth (true)
   [junit4]   2> 34614 T160 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /
   [junit4]   2> 34616 T160 oasc.AbstractZkTestCase.<clinit> WARN TEST_HOME() does not exist - solrj test?
   [junit4]   2> 34621 T160 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./solrtest-CloudSolrServerTest-1395112556507
   [junit4]   2> 34625 T160 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 34630 T161 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 34729 T160 oasc.ZkTestServer.run start zk server on port:43702
   [junit4]   2> 34777 T160 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 111590 T164 oazsp.FileTxnLog.commit WARN fsync-ing the write ahead log in SyncThread:0 took 76796ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
   [junit4]   2> 111592 T160 oas.SolrTestCaseJ4.tearDown ###Ending testDistribSearch
   [junit4]   2> 111599 T162 oazs.NIOServerCnxn.doIO WARN caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x144d330381e0000, likely client has closed socket
   [junit4]   2> 	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
   [junit4]   2> 	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2> 	at java.lang.Thread.run(Thread.java:744)
   [junit4]   2> 
   [junit4]   2> 111600 T160 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:43702 43702
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest -Dtests.method=testDistribSearch -Dtests.seed=7A8E06EF54218FD -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=de_AT -Dtests.timezone=America/Knox_IN -Dtests.file.encoding=UTF-8
   [junit4] ERROR   77.3s J0 | CloudSolrServerTest.testDistribSearch <<<
   [junit4]    > Throwable #1: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:43702 within 45000 ms
   [junit4]    > 	at __randomizedtesting.SeedInfo.seed([7A8E06EF54218FD:864E6E76821D78C1]:0)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:150)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:101)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:91)
   [junit4]    > 	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
   [junit4]    > 	at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
   [junit4]    > 	at org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
   [junit4]    > 	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:201)
   [junit4]    > 	at org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
   [junit4]    > 	at java.lang.Thread.run(Thread.java:744)
   [junit4]    > Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:43702 within 45000 ms
   [junit4]    > 	at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:223)
   [junit4]    > 	at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:142)
   [junit4]    > 	... 45 more
   [junit4]   2> 111875 T160 oas.SolrTestCaseJ4.setUp ###Starting testShutdown
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/test/J0/./solrtest-CloudSolrServerTest-1395112633760
   [junit4]   2> 111875 T160 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 111876 T168 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 111976 T160 oasc.ZkTestServer.run start zk server on port:54522
   [junit4]   2> 111977 T160 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 111981 T174 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@314df9 name:ZooKeeperConnection Watcher:127.0.0.1:54522 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 111982 T160 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 111983 T160 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 112006 T160 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 112008 T176 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@21cc15 name:ZooKeeperConnection Watcher:127.0.0.1:54522/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 112008 T160 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 112011 T160 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 112015 T160 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 112018 T160 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 112021 T160 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 112025 T160 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 112025 T160 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 112030 T160 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 112031 T160 oascc.SolrZkClient.makePath makePath: /configs/conf1/schema.xml
   [junit4]   2> 112038 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml because it doesn't exist
   [junit4]   2> 112039 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/stopwords.txt because it doesn't exist
   [junit4]   2> 112039 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/protwords.txt because it doesn't exist
   [junit4]   2> 112039 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/currency.xml because it doesn't exist
   [junit4]   2> 112040 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/enumsConfig.xml because it doesn't exist
   [junit4]   2> 112040 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/open-exchange-rates.json because it doesn't exist
   [junit4]   2> 112040 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/mapping-ISOLatin1Accent.txt because it doesn't exist
   [junit4]   2> 112041 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/old_synonyms.txt because it doesn't exist
   [junit4]   2> 112041 T160 oasc.AbstractZkTestCase.putConfig skipping /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/src/test-files/solrj/solr/collection1/conf/synonyms.txt because it doesn't exist
   [junit4]   2> 112048 T160 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 112050 T177 oaz.ClientCnxnSocketNIO.connect ERROR Unable to open socket to ff01:0:0:0:0:0:0:114/ff01:0:0:0:0:0:0:114:33332
   [junit4]   2> 112051 T177 oaz.ClientCnxn$SendThread.run WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.SocketException: Network is unreachable
   [junit4]   2> 	at sun.nio.ch.Net.connect0(Native Method)
   [junit4]   2> 	at sun.nio.ch.Net.connect(Net.java:465)
   [junit4]   2> 	at sun.nio.ch.Net.connect(Net.java:457)
   [junit4]   2> 	at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:266)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:276)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:958)
   [junit4]   2> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:993)
   [junit4]   2> 
   [junit4]   2> 113152 T177 oaz.ClientCnxnSocketNIO.connect ERROR Unable to open socket to ff01:0:0:0:0:0:0:114/ff01:0:0:0:0:0:0:114:33332
   [junit4]   2> 113256 T160 oas.SolrTestCaseJ4.tearDown ###Ending testShutdown
   [junit4]   2> 113258 T160 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:54522 54522
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 113395 T160 oas.SolrTestCaseJ4.deleteCore ###deleteCore
   [junit4]   2> 78791 T159 ccr.ThreadLeakControl.checkThreadLeaks WARNING Will linger awaiting termination of 1 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting, sim=DefaultSimilarity, locale=de_AT, timezone=America/Knox_IN
   [junit4]   2> NOTE: Linux 3.8.0-36-generic i386/Oracle Corporation 1.7.0_60-ea (32-bit)/cpus=8,threads=1,free=23566432,total=64880640
   [junit4]   2> NOTE: All tests run in this JVM: [SolrExampleStreamingBinaryTest, LargeVolumeJettyTest, LargeVolumeEmbeddedTest, TestEmbeddedSolrServer, SolrParamTest, JettyWebappTest, SolrExceptionTest, ModifiableSolrParamsTest, TestCoreAdmin, LargeVolumeBinaryJettyTest, CloudSolrServerTest]
   [junit4] Completed on J0 in 79.58s, 2 tests, 1 error <<< FAILURES!

[...truncated 88 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:447: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:45: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:37: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:202: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:490: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1275: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:907: There were test failures: 49 suites, 283 tests, 1 error

Total time: 51 minutes 37 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_60-ea-b07 -server -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi Dawid,

 

> 1) I assume you're running ant clean in jenkins before any tests commence; so the buildup of stale files should be at most for a single run.

 

YES. The workspace is clean before starting the build (Jenkins takes care of that). In addition, we run “ant clean” (as part of “ant jenkins”). This is also why I was confused about the file dates in the J0 folder (see screenshot); those seem to be changed by the tests themselves.

 

> you can 'terminate tests early' instead of trying to run all tests (and possibly have multiple failures, each leaving a trail of poop behind).

 

What would you like to propose? I just want the J0 folder to be cleaned after all tests have run (to work around the issue of single tests not cleaning up correctly). The easiest would be some “ant clean” applied solely to the tests’ temp folder after the tests have run (optional). I think this can be done with ANT (some conditional target like <target name="nukeTestDirs" if="tests.cleanup.after.run"/>).
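
For illustration only, a rough sketch of what such an opt-in cleanup target could look like (the ${tests.workDir} path and the J* fileset layout are assumptions for this example, not the real common-build.xml wiring):

    <!-- Sketch: wipe the contents of the per-JVM runner work dirs (J0, J1, ...)
         after the tests have run, but only when the tests.cleanup.after.run
         property is set on the ANT command line, e.g. -Dtests.cleanup.after.run=true. -->
    <target name="nukeTestDirs" if="tests.cleanup.after.run">
      <delete includeemptydirs="true" failonerror="false">
        <fileset dir="${tests.workDir}" includes="J*/**"/>
      </delete>
    </target>

Whether this actually runs when the tests fail depends on how it is wired into the build, which is exactly the open question below.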

 

The question is: the J0 folder is removed automatically if all tests succeed, right? Who is doing this? Maybe we can change that to also nuke the working folder if the tests failed (guarded by the above property). I just don’t want to duplicate code.

 

> So if you add -Dtests.maxfailures=1 then only a single Solr test would actually leave those temporary files. Would this help?

 

This would not help, as succeeding tests also leave their folders there (see screenshot).

 

Uwe

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf Of Dawid Weiss
Sent: Wednesday, March 26, 2014 10:08 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

 

 

The problem may be that if there are open file handles then these folders can't be removed by the JVM that created them until the process dies. 

 

I see a few solutions.

 

1) I assume you're running ant clean in jenkins before any tests commence; so the buildup of stale files should be at most for a single run.

 

2) you can 'terminate tests early' instead of trying to run all tests (and possibly have multiple failures, each leaving a trail of poop behind). This can be done by:

 

# Repeats N times but skips any tests after the first failure or M

# initial failures.

ant test -Dtests.iters=N -Dtests.failfast=yes -Dtestcase=...

ant test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...

 

So if you add -Dtests.maxfailures=1 then only a single Solr test would actually leave those temporary files. Would this help?

 

Dawid


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
The problem may be that if there are open file handles then these folders
can't be removed by the JVM that created them until the process dies.

I see a few solutions.

1) I assume you're running ant clean in jenkins before any tests commence;
so the buildup of stale files should be at most for a single run.

2) you can 'terminate tests early' instead of trying to run all tests (and
possibly have multiple failures, each leaving a trail of poop behind). This
can be done by:

# Repeats N times but skips any tests after the first failure or M
# initial failures.
ant test -Dtests.iters=N -Dtests.failfast=yes -Dtestcase=...
ant test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...

So if you add -Dtests.maxfailures=1 then only a single Solr test would
actually leave those temporary files. Would this help?

Dawid

RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi Dawid,

 

Another idea (only as a workaround for now):

On the Windows builds, can I force the test runner to nuke the “J0, J1,…” folders even though tests failed? On non-developer computers I see no reason to keep them after failed tests. Maybe with a sysprop passed to ANT? In that case the tests would still produce many files, but the contents would be cleaned up after the ANT task finishes.

 

The whole problem always appears when Windows tests fail: all Jx folders are kept alive and then the next run fails. If the Windows tests passed before, the Jx folders seem to be cleaned up, so the following build succeeds.

 

Uwe

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf Of Dawid Weiss
Sent: Wednesday, March 26, 2014 9:46 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

 

 

I'll take a look, perhaps I messed something up at the framework level as well. Maybe we should add a 'clean cwd after suite' rule... although it'd probably make a *lot* of tests fail... :)


Dawid

 

On Wed, Mar 26, 2014 at 9:40 AM, Uwe Schindler <uw...@thetaphi.de> wrote:

Hi Dawid,
 

The framework correctly removes its own files on a successful run; in the case of a failed build it does not (but that’s fine).

 

The test files should be removed by the test itself (although they may be nuked later by the test framework removing the “Jx” folders). But, if you look into the logs, you always see the well-known message with something like: “!!!! WARNING: best effort to remove C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226 FAILED !!!!!”

 

Maybe the order of execution was changed in test shutdown, so it is no longer possible to delete the test files after the test shuts down (because they are still open). I have no idea how Solr cleans up after its tests, but somebody must definitely have a look. The remaining files in the folder are also from “successful” tests; it’s just many offenders keeping the files (it doesn’t matter whether they succeed or not).

 

I can reproduce the same on my local machine. When running Solr tests, I get horrible amounts of data in the J0, J1, J2, … subdirectories. Each test creates a full Solr instance dir and never cleans it up. Some of them do, so you see them disappearing while the tests run, but offenders like the ones listed here in the screenshots stay alive. And those are huge.

 

What is also interesting to me (when looking at the screenshot): some of the test directories have older timestamps, but the parent directory was created when the test ran (later!). I have no idea *who* changes the last-modified date of those folders (maybe it happens when unzipping something and lastmod dates are preserved), but the folders are definitely not created at the time shown in Windows Explorer. The workspace was definitely nuked correctly before running the tests: ant clean works!

 

As a quick fix, I can enlarge the virtual disk of this Windows VM, but I am limited by the SSD capacity, so this won’t help much – especially if we get more tests. And I don’t think we should require something like 4 GiB of disk space to run the Solr tests!

 

I will open a bug report to review the test setup and how the cleanup is done in afterClass for the Solr tests. Lucene always removes all of its files; there are never any relics left behind.

 

Uwe

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf Of Dawid Weiss
Sent: Wednesday, March 26, 2014 8:59 AM


To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

 

 

The framework itself should clean up its own directories and files but I don't think it removes test files. It definitely doesn't remove anything in case of an unsuccessful build -- did those builds pass?


Dawid

 

On Wed, Mar 26, 2014 at 1:47 AM, Uwe Schindler <uw...@thetaphi.de> wrote:

Hi,

 

The Windows VM again ran out of disk space a minute ago.

 

The Windows VM initially had approx. 8 GB of free space. After running both workspaces (4.x and trunk), the Solr-core work folders used approx. 3 to 4 GiB each. Looking into the directory, it looks like nothing is cleaned up in the JUnit runner directory (maybe that is a Windows problem). I was expecting the tests to clean up after running, which seems to no longer be the case.

 

Trunk: (screenshot)

4.x: (screenshot)

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

 

> -----Original Message-----
> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf Of Dawid Weiss
> Sent: Tuesday, March 25, 2014 9:47 AM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!
> 
> These tests are loooong, so theoretically we could add a guard thread at the
> suite level that would watch for disk capacity thresholds...
> but it still seems like an overkill to me.
> 
> Dawid
> 
> On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler <uwe@thetaphi.de> wrote:
> > Hi,
> >
> > I also analyzed this specific events file. The events file looks quite fine, no
> > endless logging in this case. It was also only 95 MB (quite normal). So it looks
> > like something else filled the disk space, so checking the size of the event file
> > is not really useful (only for the last case). After reverting the virtual box to
> > the clean snapshot it had 9 GB free. So something must fill the disk (e.g. indexes
> > or other data). If this happens again, I will do a complete analysis of what is
> > there to find in the workspace. I only know that all the 9 GB of space are inside
> > the Jenkins workspace folder, so nothing outside fills the disk (like Windows
> > itself).
> >
> > I will think about it a bit more.
> >
> > -----
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: uwe@thetaphi.de
> >
> >
> >> -----Original Message-----
> >> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
> >> Of Dawid Weiss
> >> Sent: Tuesday, March 25, 2014 9:25 AM
> >> To: dev@lucene.apache.org
> >> Cc: Mark Miller
> >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
> >> Build # 9828 - Failure!
> >>
> >> > Would it be possible to catch those cases while running tests
> >> > (maybe before the disk is full) and fail the build? Maybe something
> >> > like the event file not being allowed to grow beyond a specific size.
> >> > If it grows, the test framework fails the whole build? We can have
> >> > something like a maximum size of 1 GB (configurable).
> >>
> >> I honestly think this is trying to cater for an insane specific
> >> scenario of a faulty test. Think of it: a single test that logs gigs
> >> to disk... Guarding against it may be next to impossible at the test
> >> framework level. We can put a condition in ant that checks for remaining
> >> temp space and fails if it's less than 5gb...
> >>
> >> Dawid
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> >> For additional commands, e-mail: dev-help@lucene.apache.org
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> > For additional commands, e-mail: dev-help@lucene.apache.org
> >
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org

 

 


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
I'll take a look, perhaps I messed something up at the framework level as
well. Maybe we should add a 'clean cwd after suite' rule... although it'd
probably make a *lot* of tests fail... :)

Dawid
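
To make the idea concrete, such a "clean cwd after suite" rule could look roughly like the sketch below. This is only an illustration with made-up names, not the actual randomizedtesting code; it assumes plain JUnit's ExternalResource registered as a @ClassRule.

    import java.io.File;
    import org.junit.rules.ExternalResource;

    // Hypothetical "clean cwd after suite" rule: after the suite finishes,
    // delete everything the suite left behind in its working directory.
    public class CleanCwdAfterSuiteRule extends ExternalResource {
      private final File cwd = new File(System.getProperty("user.dir"));
      private File[] preExisting;

      @Override
      protected void before() {
        preExisting = cwd.listFiles();      // remember what was already there
      }

      @Override
      protected void after() {
        File[] now = cwd.listFiles();
        if (now == null) return;
        for (File f : now) {
          if (!contains(preExisting, f)) {
            rm(f);                          // remove only what the suite created
          }
        }
      }

      private static boolean contains(File[] files, File f) {
        if (files != null) for (File x : files) if (x.equals(f)) return true;
        return false;
      }

      private static void rm(File f) {      // best-effort recursive delete
        File[] children = f.listFiles();
        if (children != null) for (File c : children) rm(c);
        f.delete();
      }
    }

Registered with @ClassRule on a base test class, this would wipe whatever a suite leaves behind in its working directory; as said above, a lot of tests that rely on leftover files would probably start failing once it is in place.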


On Wed, Mar 26, 2014 at 9:40 AM, Uwe Schindler <uw...@thetaphi.de> wrote:

> Hi Dawid,
>
>
>
> The framework correctly removes its own files on successful run, in the
> case of a failed build: no (but that’s fine).
>
>
>
> The test files should be removed by the test itself (although they may be nuked later by the test framework removing the “Jx” folders). But, if you look into the logs, you always see the well-known message with something like: “!!!! WARNING: best effort to remove C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226 FAILED !!!!!”
>
>
>
> Maybe the order of execution was shanged in test shutdown, so it is no
> longer possible to delete the test files after shutdown of test (because
> still open). I have no idea how Solr cleans up after its test, but there
> must definitely somebody have a look. The remaining files in the folder are
> also of “successful” test, its just many offenders, keeping the files
> (doesn’t matter if they succeed or not).
>
>
>
> I can reproduce the same on my local machine. When running solr tests, I
> get horrible amounts of data in the J0,J1,J2,… subdirectories. Each test
> creates a full Solr instance dir and never cleans them up. Some of them do,
> so you see then disappearing while running test, but the offenders like
> listed here in the screenshots stay alive. And those are huge.
>
>
>
> What is also interesting to me (if looking at the screenshot): Some of the
> test directories have older timestamps, but the parent directory was
> created when the test runs (later!). I have no idea, **who** changes the
> last modified date of those folders (maybe happens when unzipping something
> and lastmod dates are preserved), but the folders are definitely not
> created at the time shown in Windows Explorer. The workspace was definitely
> correctly nuked before running the tests – ant clean works!
>
>
>
> As a quick fix, I can enlarge the virtual disk of this Windows VM, but I
> am limited because of SSD capacity, so this won’t help – especially if we
> get more tests. And I don’t think we should require something like 4 GiB of
> disk space to run Solr tests!
>
>
>
> I will open a bug report to review the test setup and how the cleanup is
> done in afterClass for Solr tests. Lucene always removes all files, there
> are never any relicts.
>
>
>
> Uwe
>
>
>
> -----
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: uwe@thetaphi.de
>
>
>
> *From:* dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] *On Behalf
> Of *Dawid Weiss
> *Sent:* Wednesday, March 26, 2014 8:59 AM
>
> *To:* dev@lucene.apache.org
> *Subject:* Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
> Build # 9828 - Failure!
>
>
>
>
>
> The framework itself should clean up its own directories and files but I
> don't think it removes test files. It definitely doesn't remove anything
>
> in case of an unsuccessful build -- did those builds pass?
>
>
> Dawid
>
>
>
> On Wed, Mar 26, 2014 at 1:47 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
>
> Hi,
>
>
>
> the windows VM again ran out of disk space a minute ago.
>
>
>
> The Windows VM had initially approx. 8 GB free space. After running both
> workspaces (4.x and trunk), the Solr-core work folder used approx. 3 to 4
> GiB each. Looking into the directory, it looks like (maybe that is a
> windows problem), nothing is cleaned up in the JUnit runner directory. I
> was expecting that tests clean up after running, which seems no longer be
> the case.
>
>
>
> Trunk:
>
>
>
>
>
>
>
> 4.x
>
>
>
>  -----
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: uwe@thetaphi.de
>
>
>
>
>
> > -----Original Message-----
>
> > From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
>
> > Of Dawid Weiss
>
> > Sent: Tuesday, March 25, 2014 9:47 AM
>
> > To: dev@lucene.apache.org
>
> > Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
> Build #
>
> > 9828 - Failure!
>
> >
>
> > These tests are loooong, so theoretically we could add a guard thread at
> the
>
> > suite level that would watch for disk capacity thresholds...
>
> > but it still seems like an overkill to me.
>
> >
>
> > Dawid
>
> >
>
> > On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
>
> > > Hi,
>
> > >
>
> > > I also analyzed this specific events file. The events file looks quite
> fine, no
>
> > endless logging in this case. It was also only 95 MB (quite normal). So
> it looks
>
> > like something else filled the disk space, so checking size of event
> file is not
>
> > really useful (only for the last case). After reverting the virtual box
> to the
>
> > clean snapshot it had 9 GB free. So something must fill the disk (e.g.
> indexes
>
> > other data). If this happens again, I will do a complete analysis of
> whet is
>
> > there to find in workspace. I only know, that all the 9 GB of space are
> inside
>
> > the Jenkins Workspace folder, so nothing outside fills the disk (like
> windows
>
> > itsself).
>
> > >
>
> > > I will think about it a bit more.
>
> > >
>
> > > -----
>
> > > Uwe Schindler
>
> > > H.-H.-Meier-Allee 63, D-28213 Bremen
>
> > > http://www.thetaphi.de
>
> > > eMail: uwe@thetaphi.de
>
> > >
>
> > >
>
> > >> -----Original Message-----
>
> > >> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com<da...@gmail.com>]
> On
>
> > Behalf
>
> > >> Of Dawid Weiss
>
> > >> Sent: Tuesday, March 25, 2014 9:25 AM
>
> > >> To: dev@lucene.apache.org
>
> > >> Cc: Mark Miller
>
> > >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
>
> > >> Build #
>
> > >> 9828 - Failure!
>
> > >>
>
> > >> > Would it be possible to catch those cases while running tests
>
> > >> > (maybe before the disk is full) and fail the build? Maybe something
>
> > >> > that the event file is not allowed to grow beyond a specific size.
>
> > >> > If it grows, the test framework fails the whole build? We can have
>
> > >> > something like maximum size of 1 GB (configureable).
>
> > >>
>
> > >> I honestly think this is trying to cater for an insane specific
>
> > >> scenario of a faulty test. Think of it: a single test that logs gigs
>
> > >> to disk... Guarding against it may be next to impossible at the test
>
> > >> framework level. We can put a condition in ant that checks for
> remaining
>
> > temp space and fails if it's less than 5gb...
>
> > >>
>
> > >> Dawid
>
> > >>
>
> > >> ---------------------------------------------------------------------
>
> > >> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For
>
> > >> additional commands, e-mail: dev-help@lucene.apache.org
>
> > >
>
> > >
>
> > > ---------------------------------------------------------------------
>
> > > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For
>
> > > additional commands, e-mail: dev-help@lucene.apache.org
>
> > >
>
> >
>
> > ---------------------------------------------------------------------
>
> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
>
> > commands, e-mail: dev-help@lucene.apache.org
>
>
>

RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi Dawid,
 

The framework correctly removes its own files on a successful run; in the case of a failed build it does not (but that’s fine).

 

The test files should be removed by the test itself (although they may be nuked later by the test framework removing the “Jx” folders). But, if you look into the logs, you always see the well-known message with something like: “!!!! WARNING: best effort to remove C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226 FAILED !!!!!”

 

Maybe the order of execution was changed in test shutdown, so it is no longer possible to delete the test files after the test shuts down (because they are still open). I have no idea how Solr cleans up after its tests, but somebody must definitely have a look. The remaining files in the folder also belong to “successful” tests; it’s just many offenders keeping the files around (whether they succeed or not doesn’t matter).

 

I can reproduce the same on my local machine. When running Solr tests, I get horrible amounts of data in the J0, J1, J2, … subdirectories. Each test creates a full Solr instance dir and never cleans it up. Some of them do, so you see them disappearing while the tests run, but the offenders listed here in the screenshots stay alive. And those are huge.

 

What is also interesting to me (looking at the screenshot): some of the test directories have older timestamps, but the parent directory was created when the test ran (later!). I have no idea *who* changes the last-modified date of those folders (maybe it happens when something is unzipped and lastmod dates are preserved), but the folders are definitely not created at the time shown in Windows Explorer. The workspace was definitely nuked correctly before running the tests; ant clean works!

 

As a quick fix, I can enlarge the virtual disk of this Windows VM, but I am limited by SSD capacity, so this won’t help for long, especially if we get more tests. And I don’t think we should require something like 4 GiB of disk space to run Solr tests!

 

I will open a bug report to review the test setup and how the cleanup is done in afterClass for Solr tests. Lucene always removes all files; there are never any leftovers.
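
For illustration only, the kind of afterClass cleanup meant here could look roughly like the following sketch; the class name, the tracking list and the directory naming are assumptions, not Solr's actual test code.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.AfterClass;

    public abstract class SketchSolrTestBase {
      // hypothetical: every Solr instance dir created by a test gets registered here
      private static final List<File> createdInstanceDirs = new ArrayList<>();

      protected static File newInstanceDir(String name) {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            name + "-" + System.nanoTime());
        dir.mkdirs();
        createdInstanceDirs.add(dir);
        return dir;
      }

      @AfterClass
      public static void cleanupInstanceDirs() {
        for (File dir : createdInstanceDirs) {
          rm(dir);                          // best-effort recursive delete
        }
        createdInstanceDirs.clear();
      }

      private static void rm(File f) {
        File[] children = f.listFiles();
        if (children != null) for (File c : children) rm(c);
        f.delete();
      }
    }

The important part is only that everything a suite creates is tracked somewhere and removed in afterClass, instead of relying on each individual test to do it.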

 

Uwe

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf Of Dawid Weiss
Sent: Wednesday, March 26, 2014 8:59 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

 

 

The framework itself should clean up its own directories and files but I don't think it removes test files. It definitely doesn't remove anything

in case of an unsuccessful build -- did those builds pass?


Dawid

 

On Wed, Mar 26, 2014 at 1:47 AM, Uwe Schindler <uw...@thetaphi.de> wrote:

Hi,

 

the Windows VM again ran out of disk space a minute ago.

 

The Windows VM initially had approx. 8 GB of free space. After running both workspaces (4.x and trunk), the Solr-core work folder used approx. 3 to 4 GiB each. Looking into the directory, it seems (maybe that is a Windows problem) that nothing is cleaned up in the JUnit runner directory. I was expecting tests to clean up after running, which no longer seems to be the case.

 

Trunk:

 



 

 

4.x

 



-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

 

> -----Original Message-----

> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf

> Of Dawid Weiss

> Sent: Tuesday, March 25, 2014 9:47 AM

> To: dev@lucene.apache.org

> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #

> 9828 - Failure!

> 

> These tests are loooong, so theoretically we could add a guard thread at the

> suite level that would watch for disk capacity thresholds...

> but it still seems like an overkill to me.

> 

> Dawid

> 

> On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler < <ma...@thetaphi.de> uwe@thetaphi.de> wrote:

> > Hi,

> >

> > I also analyzed this specific events file. The events file looks quite fine, no

> endless logging in this case. It was also only 95 MB (quite normal). So it looks

> like something else filled the disk space, so checking size of event file is not

> really useful (only for the last case). After reverting the virtual box to the

> clean snapshot it had 9 GB free. So something must fill the disk (e.g. indexes

> other data). If this happens again, I will do a complete analysis of whet is

> there to find in workspace. I only know, that all the 9 GB of space are inside

> the Jenkins Workspace folder, so nothing outside fills the disk (like windows

> itsself).

> >

> > I will think about it a bit more.

> >

> > -----

> > Uwe Schindler

> > H.-H.-Meier-Allee 63, D-28213 Bremen

> >  <http://www.thetaphi.de> http://www.thetaphi.de

> > eMail:  <ma...@thetaphi.de> uwe@thetaphi.de

> >

> >

> >> -----Original Message-----

> >> From:  <ma...@gmail.com> dawid.weiss@gmail.com [ <ma...@gmail.com> mailto:dawid.weiss@gmail.com] On

> Behalf

> >> Of Dawid Weiss

> >> Sent: Tuesday, March 25, 2014 9:25 AM

> >> To:  <ma...@lucene.apache.org> dev@lucene.apache.org

> >> Cc: Mark Miller

> >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -

> >> Build #

> >> 9828 - Failure!

> >>

> >> > Would it be possible to catch those cases while running tests

> >> > (maybe before the disk is full) and fail the build? Maybe something

> >> > that the event file is not allowed to grow beyond a specific size.

> >> > If it grows, the test framework fails the whole build? We can have

> >> > something like maximum size of 1 GB (configureable).

> >>

> >> I honestly think this is trying to cater for an insane specific

> >> scenario of a faulty test. Think of it: a single test that logs gigs

> >> to disk... Guarding against it may be next to impossible at the test

> >> framework level. We can put a condition in ant that checks for remaining

> temp space and fails if it's less than 5gb...

> >>

> >> Dawid

> >>

> >> ---------------------------------------------------------------------

> >> To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For

> >> additional commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org

> >

> >

> > ---------------------------------------------------------------------

> > To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For

> > additional commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org

> >

> 

> ---------------------------------------------------------------------

> To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For additional

> commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org

 


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
The framework itself should clean up its own directories and files but I
don't think it removes test files. It definitely doesn't remove anything
in case of an unsuccessful build -- did those builds pass?

Dawid


On Wed, Mar 26, 2014 at 1:47 AM, Uwe Schindler <uw...@thetaphi.de> wrote:

> Hi,
>
>
>
> the windows VM again ran out of disk space a minute ago.
>
>
>
> The Windows VM had initially approx. 8 GB free space. After running both
> workspaces (4.x and trunk), the Solr-core work folder used approx. 3 to 4
> GiB each. Looking into the directory, it looks like (maybe that is a
> windows problem), nothing is cleaned up in the JUnit runner directory. I
> was expecting that tests clean up after running, which seems no longer be
> the case.
>
>
>
> Trunk:
>
>
>
>
>
>
>
> 4.x
>
>
>
>  -----
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: uwe@thetaphi.de
>
>
>
>
>
> > -----Original Message-----
>
> > From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
>
> > Of Dawid Weiss
>
> > Sent: Tuesday, March 25, 2014 9:47 AM
>
> > To: dev@lucene.apache.org
>
> > Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
> Build #
>
> > 9828 - Failure!
>
> >
>
> > These tests are loooong, so theoretically we could add a guard thread at
> the
>
> > suite level that would watch for disk capacity thresholds...
>
> > but it still seems like an overkill to me.
>
> >
>
> > Dawid
>
> >
>
> > On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
>
> > > Hi,
>
> > >
>
> > > I also analyzed this specific events file. The events file looks quite
> fine, no
>
> > endless logging in this case. It was also only 95 MB (quite normal). So
> it looks
>
> > like something else filled the disk space, so checking size of event
> file is not
>
> > really useful (only for the last case). After reverting the virtual box
> to the
>
> > clean snapshot it had 9 GB free. So something must fill the disk (e.g.
> indexes
>
> > other data). If this happens again, I will do a complete analysis of
> whet is
>
> > there to find in workspace. I only know, that all the 9 GB of space are
> inside
>
> > the Jenkins Workspace folder, so nothing outside fills the disk (like
> windows
>
> > itsself).
>
> > >
>
> > > I will think about it a bit more.
>
> > >
>
> > > -----
>
> > > Uwe Schindler
>
> > > H.-H.-Meier-Allee 63, D-28213 Bremen
>
> > > http://www.thetaphi.de
>
> > > eMail: uwe@thetaphi.de
>
> > >
>
> > >
>
> > >> -----Original Message-----
>
> > >> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com<da...@gmail.com>]
> On
>
> > Behalf
>
> > >> Of Dawid Weiss
>
> > >> Sent: Tuesday, March 25, 2014 9:25 AM
>
> > >> To: dev@lucene.apache.org
>
> > >> Cc: Mark Miller
>
> > >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
>
> > >> Build #
>
> > >> 9828 - Failure!
>
> > >>
>
> > >> > Would it be possible to catch those cases while running tests
>
> > >> > (maybe before the disk is full) and fail the build? Maybe something
>
> > >> > that the event file is not allowed to grow beyond a specific size.
>
> > >> > If it grows, the test framework fails the whole build? We can have
>
> > >> > something like maximum size of 1 GB (configureable).
>
> > >>
>
> > >> I honestly think this is trying to cater for an insane specific
>
> > >> scenario of a faulty test. Think of it: a single test that logs gigs
>
> > >> to disk... Guarding against it may be next to impossible at the test
>
> > >> framework level. We can put a condition in ant that checks for
> remaining
>
> > temp space and fails if it's less than 5gb...
>
> > >>
>
> > >> Dawid
>
> > >>
>
> > >> ---------------------------------------------------------------------
>
> > >> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For
>
> > >> additional commands, e-mail: dev-help@lucene.apache.org
>
> > >
>
> > >
>
> > > ---------------------------------------------------------------------
>
> > > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For
>
> > > additional commands, e-mail: dev-help@lucene.apache.org
>
> > >
>
> >
>
> > ---------------------------------------------------------------------
>
> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
>
> > commands, e-mail: dev-help@lucene.apache.org
>

RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi,

 

the Windows VM again ran out of disk space a minute ago.

 

The Windows VM initially had approx. 8 GB of free space. After running both workspaces (4.x and trunk), the Solr-core work folder used approx. 3 to 4 GiB each. Looking into the directory, it seems (maybe that is a Windows problem) that nothing is cleaned up in the JUnit runner directory. I was expecting tests to clean up after running, which no longer seems to be the case.

 

Trunk:

 



 

 

4.x

 



-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

 

> -----Original Message-----

> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf

> Of Dawid Weiss

> Sent: Tuesday, March 25, 2014 9:47 AM

> To: dev@lucene.apache.org

> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #

> 9828 - Failure!

> 

> These tests are loooong, so theoretically we could add a guard thread at the

> suite level that would watch for disk capacity thresholds...

> but it still seems like an overkill to me.

> 

> Dawid

> 

> On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler < <ma...@thetaphi.de> uwe@thetaphi.de> wrote:

> > Hi,

> >

> > I also analyzed this specific events file. The events file looks quite fine, no

> endless logging in this case. It was also only 95 MB (quite normal). So it looks

> like something else filled the disk space, so checking size of event file is not

> really useful (only for the last case). After reverting the virtual box to the

> clean snapshot it had 9 GB free. So something must fill the disk (e.g. indexes

> other data). If this happens again, I will do a complete analysis of whet is

> there to find in workspace. I only know, that all the 9 GB of space are inside

> the Jenkins Workspace folder, so nothing outside fills the disk (like windows

> itsself).

> >

> > I will think about it a bit more.

> >

> > -----

> > Uwe Schindler

> > H.-H.-Meier-Allee 63, D-28213 Bremen

> >  <http://www.thetaphi.de> http://www.thetaphi.de

> > eMail:  <ma...@thetaphi.de> uwe@thetaphi.de

> >

> >

> >> -----Original Message-----

> >> From:  <ma...@gmail.com> dawid.weiss@gmail.com [ <ma...@gmail.com> mailto:dawid.weiss@gmail.com] On

> Behalf

> >> Of Dawid Weiss

> >> Sent: Tuesday, March 25, 2014 9:25 AM

> >> To:  <ma...@lucene.apache.org> dev@lucene.apache.org

> >> Cc: Mark Miller

> >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -

> >> Build #

> >> 9828 - Failure!

> >>

> >> > Would it be possible to catch those cases while running tests

> >> > (maybe before the disk is full) and fail the build? Maybe something

> >> > that the event file is not allowed to grow beyond a specific size.

> >> > If it grows, the test framework fails the whole build? We can have

> >> > something like maximum size of 1 GB (configureable).

> >>

> >> I honestly think this is trying to cater for an insane specific

> >> scenario of a faulty test. Think of it: a single test that logs gigs

> >> to disk... Guarding against it may be next to impossible at the test

> >> framework level. We can put a condition in ant that checks for remaining

> temp space and fails if it's less than 5gb...

> >>

> >> Dawid

> >>

> >> ---------------------------------------------------------------------

> >> To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For

> >> additional commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org

> >

> >

> > ---------------------------------------------------------------------

> > To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For

> > additional commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org

> >

> 

> ---------------------------------------------------------------------

> To unsubscribe, e-mail:  <ma...@lucene.apache.org> dev-unsubscribe@lucene.apache.org For additional

> commands, e-mail:  <ma...@lucene.apache.org> dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
These tests are loooong, so theoretically we could add a guard thread
at the suite level that would watch for disk capacity thresholds...
but it still seems like overkill to me.

Dawid
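
A minimal sketch of such a guard thread, assuming nothing more than java.io.File.getUsableSpace(); how exactly it would fail the suite is left open here:

    import java.io.File;

    // Hypothetical suite-level guard: a daemon thread that watches free disk
    // space and trips once it drops below a threshold.
    public final class DiskSpaceGuard extends Thread {
      private final File dir;
      private final long minFreeBytes;
      private volatile boolean tripped;

      public DiskSpaceGuard(File dir, long minFreeBytes) {
        this.dir = dir;
        this.minFreeBytes = minFreeBytes;
        setDaemon(true);
        setName("disk-space-guard");
      }

      @Override
      public void run() {
        try {
          while (!tripped) {
            if (dir.getUsableSpace() < minFreeBytes) {
              tripped = true;               // a real rule would fail the suite here
            }
            Thread.sleep(5000);             // poll every few seconds
          }
        } catch (InterruptedException e) {
          // stop watching when the suite interrupts us
        }
      }

      public boolean tripped() {
        return tripped;
      }
    }

It could be started in a @BeforeClass hook against the suite's temp dir and checked (or interrupted) in @AfterClass; as said, probably overkill compared to a single free-space check before the whole run.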

On Tue, Mar 25, 2014 at 9:36 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
> Hi,
>
> I also analyzed this specific events file. The events file looks quite fine, no endless logging in this case. It was also only 95 MB (quite normal). So it looks like something else filled the disk space, so checking size of event file is not really useful (only for the last case). After reverting the virtual box to the clean snapshot it had 9 GB free. So something must fill the disk (e.g. indexes other data). If this happens again, I will do a complete analysis of whet is there to find in workspace. I only know, that all the 9 GB of space are inside the Jenkins Workspace folder, so nothing outside fills the disk (like windows itsself).
>
> I will think about it a bit more.
>
> -----
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: uwe@thetaphi.de
>
>
>> -----Original Message-----
>> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
>> Of Dawid Weiss
>> Sent: Tuesday, March 25, 2014 9:25 AM
>> To: dev@lucene.apache.org
>> Cc: Mark Miller
>> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #
>> 9828 - Failure!
>>
>> > Would it be possible to catch those cases while running tests (maybe
>> > before the disk is full) and fail the build? Maybe something that the
>> > event file is not allowed to grow beyond a specific size. If it grows,
>> > the test framework fails the whole build? We can have something like
>> > maximum size of 1 GB (configureable).
>>
>> I honestly think this is trying to cater for an insane specific scenario of a faulty
>> test. Think of it: a single test that logs gigs to disk... Guarding against it may be
>> next to impossible at the test framework level. We can put a condition in ant
>> that checks for remaining temp space and fails if it's less than 5gb...
>>
>> Dawid
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
>> commands, e-mail: dev-help@lucene.apache.org
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi,

I also analyzed this specific events file. The events file looks quite fine, no endless logging in this case. It was also only 95 MB (quite normal). So it looks like something else filled the disk space, and checking the size of the event file is not really useful (only for that last case). After reverting the VirtualBox VM to the clean snapshot it had 9 GB free. So something must be filling the disk (e.g. indexes or other data). If this happens again, I will do a complete analysis of what there is to find in the workspace. I only know that all the 9 GB of space are inside the Jenkins workspace folder, so nothing outside it fills the disk (like Windows itself).

I will think about it a bit more.

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: uwe@thetaphi.de


> -----Original Message-----
> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
> Of Dawid Weiss
> Sent: Tuesday, March 25, 2014 9:25 AM
> To: dev@lucene.apache.org
> Cc: Mark Miller
> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #
> 9828 - Failure!
> 
> > Would it be possible to catch those cases while running tests (maybe
> > before the disk is full) and fail the build? Maybe something that the
> > event file is not allowed to grow beyond a specific size. If it grows,
> > the test framework fails the whole build? We can have something like
> > maximum size of 1 GB (configureable).
> 
> I honestly think this is trying to cater for an insane specific scenario of a faulty
> test. Think of it: a single test that logs gigs to disk... Guarding against it may be
> next to impossible at the test framework level. We can put a condition in ant
> that checks for remaining temp space and fails if it's less than 5gb...
> 
> Dawid
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
> commands, e-mail: dev-help@lucene.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> Would it be possible to catch those cases while running tests (maybe before
> the disk is full) and fail the build? Maybe something that the event file is
> not allowed to grow beyond a specific size. If it grows, the test framework
> fails the whole build? We can have something like maximum size of 1 GB
> (configureable).

I honestly think this is trying to cater for an insanely specific
scenario of a faulty test. Think of it: a single test that logs gigabytes
to disk... Guarding against it may be next to impossible at the test
framework level. We can put a condition in Ant that checks for
remaining temp space and fails if it's less than 5 GB...

Dawid

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Hi Mark,

 

Last night this happened again (for the 3rd time), for the second time in a Windows VM. Maybe it was a different test than the one where we first saw it.

 

I don’t think disabling SSL helps here. It happens together with SSL, that’s right, but from the log file there seems to be some bug in the test setup: the test tries to reconnect endlessly, retrying again and again instead of giving up and failing the test. This fills disk space quite fast. It also makes the tests never end.

 

The rate of reconnects is so high that the log file fills with megabytes in a very short time. When the disk is full, the carrotsearch framework is no longer able to handle this case and the whole JVM setup hangs.
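
Just to illustrate the kind of guard that would stop such a loop (this is not the actual fix in ChaosMonkeyNothingIsSafeTest, only a sketch with invented names): an error handler could cap and back off its retries instead of hammering the connection and the log.

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical bounded-retry error handler: back off on each connection
    // failure and give up after a fixed number of attempts instead of looping forever.
    final class BoundedRetryHandler {
      private static final int MAX_RETRIES = 100;
      private final AtomicInteger failures = new AtomicInteger();

      // returns true if the caller should retry, false if it should stop and fail the test
      boolean handleError(Throwable t) {
        int n = failures.incrementAndGet();
        if (n > MAX_RETRIES) {
          return false;
        }
        if (n % 10 == 0) {
          System.err.println("still failing after " + n + " attempts: " + t);  // log only occasionally
        }
        try {
          TimeUnit.MILLISECONDS.sleep(Math.min(5000, 50L * n));  // linear backoff, capped at 5s
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return false;
        }
        return true;
      }
    }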

 

Unfortunately, I have no data available anymore because I had to revert the Windows VirtualBox VM to the latest clean snapshot:

 

[junit4] Could not serialize report for suite org.apache.solr.cloud.TestShortCircuitedRequests: java.io.IOException: There is not enough space on the disk

   [junit4] Mar 24, 2014 8:44:09 PM com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.eventbus.EventBus$LoggingSubscriberExceptionHandler handleException

   [junit4] SEVERE: Could not dispatch event: com.carrotsearch.ant.tasks.junit4.listeners.TextReport@1290a22 to public void com.carrotsearch.ant.tasks.junit4.listeners.TextReport.onSuiteResult(com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedSuiteResultEvent)

 

I am not even sure if this is the test that caused this.

 

Would it be possible to catch those cases while running tests (maybe before the disk is full) and fail the build? Maybe something like: the event file is not allowed to grow beyond a specific size, and if it does, the test framework fails the whole build? We could have something like a maximum size of 1 GB (configurable).
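
A hedged sketch of what such a cap could look like, as a wrapper around whatever stream the serializer writes to (this is not part of the junit4 runner, just an illustration of the idea):

    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical cap on the events file: fail loudly once more than
    // maxBytes have been written, instead of filling the disk.
    final class SizeCappedOutputStream extends FilterOutputStream {
      private final long maxBytes;
      private long written;

      SizeCappedOutputStream(OutputStream out, long maxBytes) {
        super(out);
        this.maxBytes = maxBytes;
      }

      @Override
      public void write(byte[] b, int off, int len) throws IOException {
        written += len;
        if (written > maxBytes) {
          throw new IOException("events file exceeded " + maxBytes + " bytes, failing the run");
        }
        out.write(b, off, len);
      }

      @Override
      public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
      }
    }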

 

Uwe

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: uwe@thetaphi.de

 

From: Mark Miller [mailto:markrmiller@gmail.com] 
Sent: Wednesday, March 19, 2014 6:51 PM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

 

I'll disable SSL for that test for now. SSL in general has unfortunately been hard to get working smoothly with tests.

 

I've got a JIRA issue to look at improving it, but it's not likely I'll look into it for some time, so until then, tests having issues with SSL should probably simply disable SSL for now.

 

- Mark

 

On Tue, Mar 18, 2014 at 4:54 AM, Dawid Weiss <da...@cs.put.poznan.pl> wrote:

It's a lot of error messages like this one. I have the full syserr
dump if needed.

D.

2773140 T6223 oasc.ChaosMonkeyNothingIsSafeTest$FullThrottleStopableIndexingThread$1.handleError
WARN suss error java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:522)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:401)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


On Tue, Mar 18, 2014 at 9:46 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
> I dig "tail -10000" to extract the last 10000 lines. The file is also in the archive at same place.
>
> It is indeed a loop. The code loops endless in a "Connection Refused" loop, without any delay between the events. After approx. 2:50 hours this hit the limits of the SSD file system. This test fails so often since it was "fixed", we should revert to @BadApple.
>
> Uwe
>
> -----
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: uwe@thetaphi.de
>
>
>> -----Original Message-----
>> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
>> Of Dawid Weiss
>> Sent: Tuesday, March 18, 2014 9:16 AM
>> To: dev@lucene.apache.org
>> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #
>> 9828 - Failure!
>>
>> >       junit4-J0-20140317_230107_233.events    8.17 GB [fingerprint] view
>> >
>> > This build created a 8.17 GB big events file and failed with out of space.
>> How can this happen?
>>
>> Can you peek at it? It's probably something that logs in a loop or something.
>> I'm fetching it right now, let's see if I can figure it out.
>>
>> D.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
>> commands, e-mail: dev-help@lucene.apache.org
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org





 

-- 

- Mark

 

http://about.me/markrmiller


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Mark Miller <ma...@gmail.com>.
I'll disable SSL for that test for now. SSL in general has unfortunately been
hard to get working smoothly with tests.

I've got a JIRA issue to look at improving it, but it's not likely I'll look
into it for some time, so until then, tests having issues with SSL should
probably simply disable SSL for now.

- Mark
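
For reference, opting a single suite out of SSL could look like the sketch below; it assumes the SuppressSSL annotation on SolrTestCaseJ4 (if the annotation or its location differs, the idea is the same).

    import org.apache.solr.SolrTestCaseJ4.SuppressSSL;

    // Hedged sketch: disable SSL randomization for one suite only; the rest of
    // the test is unchanged. The annotation is assumed here, not verified.
    @SuppressSSL
    public class ChaosMonkeyNothingIsSafeTestSketch /* extends the usual cloud test base */ {
      // test body unchanged; only SSL is suppressed for this suite
    }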


On Tue, Mar 18, 2014 at 4:54 AM, Dawid Weiss
<da...@cs.put.poznan.pl>wrote:

> It's a lot of error messages like this one. I have the full syserr
> dump if needed.
>
> D.
>
> 2773140 T6223
> oasc.ChaosMonkeyNothingIsSafeTest$FullThrottleStopableIndexingThread$1.handleError
> WARN suss error java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
> at
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:522)
> at
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:401)
> at
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
> at
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
> at
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
> at
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
> at
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
> at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
>
> On Tue, Mar 18, 2014 at 9:46 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
> > I dig "tail -10000" to extract the last 10000 lines. The file is also in
> the archive at same place.
> >
> > It is indeed a loop. The code loops endless in a "Connection Refused"
> loop, without any delay between the events. After approx. 2:50 hours this
> hit the limits of the SSD file system. This test fails so often since it
> was "fixed", we should revert to @BadApple.
> >
> > Uwe
> >
> > -----
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: uwe@thetaphi.de
> >
> >
> >> -----Original Message-----
> >> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
> >> Of Dawid Weiss
> >> Sent: Tuesday, March 18, 2014 9:16 AM
> >> To: dev@lucene.apache.org
> >> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) -
> Build #
> >> 9828 - Failure!
> >>
> >> >       junit4-J0-20140317_230107_233.events    8.17 GB [fingerprint]
> view
> >> >
> >> > This build created a 8.17 GB big events file and failed with out of
> space.
> >> How can this happen?
> >>
> >> Can you peek at it? It's probably something that logs in a loop or
> something.
> >> I'm fetching it right now, let's see if I can figure it out.
> >>
> >> D.
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For
> additional
> >> commands, e-mail: dev-help@lucene.apache.org
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> > For additional commands, e-mail: dev-help@lucene.apache.org
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>


-- 
- Mark

http://about.me/markrmiller

Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
It's a lot of error messages like this one. I have the full syserr
dump if needed.

D.

2773140 T6223 oasc.ChaosMonkeyNothingIsSafeTest$FullThrottleStopableIndexingThread$1.handleError
WARN suss error java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:522)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:401)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

On Tue, Mar 18, 2014 at 9:46 AM, Uwe Schindler <uw...@thetaphi.de> wrote:
> I dig "tail -10000" to extract the last 10000 lines. The file is also in the archive at same place.
>
> It is indeed a loop. The code loops endless in a "Connection Refused" loop, without any delay between the events. After approx. 2:50 hours this hit the limits of the SSD file system. This test fails so often since it was "fixed", we should revert to @BadApple.
>
> Uwe
>
> -----
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: uwe@thetaphi.de
>
>
>> -----Original Message-----
>> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
>> Of Dawid Weiss
>> Sent: Tuesday, March 18, 2014 9:16 AM
>> To: dev@lucene.apache.org
>> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #
>> 9828 - Failure!
>>
>> >       junit4-J0-20140317_230107_233.events    8.17 GB [fingerprint] view
>> >
>> > This build created a 8.17 GB big events file and failed with out of space.
>> How can this happen?
>>
>> Can you peek at it? It's probably something that logs in a loop or something.
>> I'm fetching it right now, let's see if I can figure it out.
>>
>> D.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
>> commands, e-mail: dev-help@lucene.apache.org
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
I did "tail -10000" to extract the last 10000 lines. The file is also in the archive at the same place.

It is indeed a loop. The code loops endlessly on "Connection refused", without any delay between the attempts. After approx. 2:50 hours this hit the limits of the SSD file system. This test has failed so often since it was "fixed" that we should revert it to @BadApple.

Uwe

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: uwe@thetaphi.de


> -----Original Message-----
> From: dawid.weiss@gmail.com [mailto:dawid.weiss@gmail.com] On Behalf
> Of Dawid Weiss
> Sent: Tuesday, March 18, 2014 9:16 AM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build #
> 9828 - Failure!
> 
> >       junit4-J0-20140317_230107_233.events    8.17 GB [fingerprint] view
> >
> > This build created a 8.17 GB big events file and failed with out of space.
> How can this happen?
> 
> Can you peek at it? It's probably something that logs in a loop or something.
> I'm fetching it right now, let's see if I can figure it out.
> 
> D.
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
> commands, e-mail: dev-help@lucene.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
>       junit4-J0-20140317_230107_233.events    8.17 GB [fingerprint] view
>
> This build created a 8.17 GB big events file and failed with out of space. How can this happen?

Can you peek at it? It's probably something that logs in a loop or
something. I'm fetching it right now, let's see if I can figure it
out.

D.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828 - Failure!

Posted by Uwe Schindler <uw...@thetaphi.de>.
Build #9828 (17.03.2014 22:35:20)
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9828/

Java: 32bit/jdk1.7.0_51 -server -XX:+UseG1GC
Build artifacts:
	junit4-J0-20140317_230107_233.events	8.17 GB	[fingerprint] view
	junit4-J1-20140317_230107_233.events	61.65 MB	[fingerprint] view

This build created an 8.17 GB events file and failed with "out of space". How can this happen?

Uwe

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: uwe@thetaphi.de


> -----Original Message-----
> From: Policeman Jenkins Server [mailto:jenkins@thetaphi.de]
> Sent: Tuesday, March 18, 2014 2:26 AM
> To: dev@lucene.apache.org; steffkes@apache.org; rmuir@apache.org
> Subject: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_51) - Build # 9828
> - Failure!
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9828/
> Java: 32bit/jdk1.7.0_51 -server -XX:+UseG1GC
> 
> All tests passed
> 
> Build Log:
> [...truncated 12076 lines...]
>    [junit4] JVM J0: stderr was not empty, see:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-
> core/test/temp/junit4-J0-20140317_230107_233.syserr
>    [junit4] >>> JVM J0: stderr (verbatim) ----
>    [junit4] WARN: Unhandled exception in event serialization. ->
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.JsonIOExc
> eption: java.io.IOException: No space left on device
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJs
> on(Gson.java:514)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:61
> )
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:3
> 76)
>    [junit4] 	at
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>    [junit4] 	at
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>    [junit4] 	at java.io.PrintStream.flush(PrintStream.java:338)
>    [junit4] 	at
> sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
>    [junit4] 	at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>    [junit4] 	at
> java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>    [junit4] 	at
> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
>    [junit4] 	at
> org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
>    [junit4] 	at
> org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>    [junit4] 	at
> org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>    [junit4] 	at
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppende
> rs(AppenderAttachableImpl.java:66)
>    [junit4] 	at
> org.apache.log4j.Category.callAppenders(Category.java:206)
>    [junit4] 	at org.apache.log4j.Category.forcedLog(Category.java:391)
>    [junit4] 	at org.apache.log4j.Category.log(Category.java:856)
>    [junit4] 	at
> org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:478)
>    [junit4] 	at
> org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest$FullThrottleStopableI
> ndexingThread$1.handleError(ChaosMonkeyNothingIsSafeTest.java:284)
>    [junit4] 	at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(C
> oncurrentUpdateSolrServer.java:256)
>    [junit4] 	at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.jav
> a:1145)
>    [junit4] 	at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.ja
> va:615)
>    [junit4] 	at java.lang.Thread.run(Thread.java:744)
>    [junit4] Caused by: java.io.IOException: No space left on device
>    [junit4] 	at java.io.RandomAccessFile.writeBytes0(Native Method)
>    [junit4] 	at
> java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
>    [junit4] 	at
> java.io.RandomAccessFile.write(RandomAccessFile.java:550)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.RandomAccessFileOutputStream.wri
> te(RandomAccessFileOutputStream.java:28)
>    [junit4] 	at
> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>    [junit4] 	at
> sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>    [junit4] 	at
> sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
>    [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
>    [junit4] 	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:113)
>    [junit4] 	at
> java.io.OutputStreamWriter.write(OutputStreamWriter.java:194)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.Js
> onWriter.string(JsonWriter.java:535)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.Js
> onWriter.value(JsonWriter.java:364)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bi
> nd.TypeAdapters$22.write(TypeAdapters.java:626)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bi
> nd.TypeAdapters$22.write(TypeAdapters.java:578)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.St
> reams.write(Streams.java:67)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.GsonToMi
> niGsonTypeAdapterFactory$3.write(GsonToMiniGsonTypeAdapterFactory.ja
> va:98)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bi
> nd.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWra
> pper.java:66)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bi
> nd.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.jav
> a:82)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bi
> nd.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFact
> ory.java:194)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJs
> on(Gson.java:512)
>    [junit4] 	... 22 more
>    [junit4]
>    [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer
> already closed.
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41
> )
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFailure(RunLi
> stenerEmitter.java:54)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.t
> estFailure(NoExceptionRunListenerDecorator.java:55)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.te
> stFailure(BeforeAfterRunListenerDecorator.java:60)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$4.notifyListener
> (OrderedRunNotifier.java:129)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.ru
> n(OrderedRunNotifier.java:63)
>    [junit4] 	at
> com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFailure(
> OrderedRunNotifier.java:126)
>    [junit4] 	at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadL
> eakControl.java:406)
>    [junit4] 	at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(Randomi
> zedRunner.java:641)
>    [junit4] 	at
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(Rando
> mizedRunner.java:128)
>    [junit4] 	at
> com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(Randomized
> Runner.java:558)
>    [junit4]
>    [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFinished(RunListenerEmitter.java:113)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testFinished(NoExceptionRunListenerDecorator.java:47)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testFinished(BeforeAfterRunListenerDecorator.java:51)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$7.notifyListener(OrderedRunNotifier.java:179)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFinished(OrderedRunNotifier.java:176)
>    [junit4] 	at com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:410)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:641)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:128)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:558)
>    [junit4]
>    [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testFailure(RunListenerEmitter.java:52)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testFailure(NoExceptionRunListenerDecorator.java:55)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testFailure(BeforeAfterRunListenerDecorator.java:60)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$4.notifyListener(OrderedRunNotifier.java:129)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestFailure(OrderedRunNotifier.java:126)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.fireTestFailure(RandomizedRunner.java:753)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:654)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:128)
>    [junit4] 	at com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:558)
>    [junit4]
>    [junit4] WARN: Event serializer exception. -> java.io.IOException: Serializer already closed.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.RunListenerEmitter.testRunFinished(RunListenerEmitter.java:120)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.NoExceptionRunListenerDecorator.testRunFinished(NoExceptionRunListenerDecorator.java:31)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.BeforeAfterRunListenerDecorator.testRunFinished(BeforeAfterRunListenerDecorator.java:33)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$2.notifyListener(OrderedRunNotifier.java:94)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier$SafeNotifier.run(OrderedRunNotifier.java:63)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.OrderedRunNotifier.fireTestRunFinished(OrderedRunNotifier.java:91)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:181)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:276)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
>    [junit4]
>    [junit4] WARN: Exception at main loop level. -> java.lang.RuntimeException: java.io.IOException: Serializer already closed.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:34)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:13)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.Iterators$5.hasNext(Iterators.java:542)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:169)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:276)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
>    [junit4] Caused by: java.io.IOException: Serializer already closed.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:41)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.slave.StdInLineIterator.computeNext(StdInLineIterator.java:28)
>    [junit4] 	... 7 more
>    [junit4] <<< JVM J0: EOF ----
> 
> [...truncated 2 lines...]
>    [junit4] ERROR: JVM J0 ended with an exception, command line: /var/lib/jenkins/tools/java/32bit/jdk1.7.0_51/jre/bin/java -server -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps -Dtests.prefix=tests -Dtests.seed=D0DA9EC3A93F0E07 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. -Djava.io.tmpdir=. -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/tests.policy -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.filterstacks=true -Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/test:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/test-framework/lib/junit4-ant-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queries/lucene-queries-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/antlr-runtime-3.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-commons-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-cli-1.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-codec-1.9.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-configuration-1.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-fileupload-1.2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/commons-lang-2.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/concurrentlinkedhashmap-lru-1.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/dom4j-1.6.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/guava-14.0.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-annotations-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-auth-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-common-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hadoop-hdfs-2.2.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/hppc-0.5.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/joda-time-2.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/org.restlet-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/org.restlet.ext.servlet-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/protobuf-java-2.5.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/spatial4j-0.4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/commons-io-2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpclient-4.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpcore-4.3.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/httpmime-4.3.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/jcl-over-slf4j-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/jul-to-slf4j-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/log4j-1.2.16.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/noggit-0.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/slf4j-api-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/slf4j-log4j12-1.7.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/wstx-asl-3.2.7.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/solrj/lib/zookeeper-3.4.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-continuation-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-deploy-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-http-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-io-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-jmx-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-security-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-server-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-servlet-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-util-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-webapp-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/jetty-xml-8.1.10.v20130312.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/lib/servlet-api-3.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/example-DIH/solr/db/lib/derby-10.9.1.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/example/example-DIH/solr/db/lib/hsqldb-1.8.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.1.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/antlr-runtime-3.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/asm-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/asm-commons-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/cglib-nodep-2.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/commons-collections-3.2.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/dom4j-1.6.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/easymock-3.0.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hadoop-common-2.2.0-tests.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hadoop-hdfs-2.2.0-tests.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/hppc-0.5.2.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/javax.servlet-api-3.0.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jersey-core-1.8.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jetty-6.1.26.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/jetty-util-6.1.26.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/test-lib/objenesis-1.2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/java/32bit/jdk1.7.0_51/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.1.1.jar -ea:org.apache.lucene... -ea:org.apache.solr... com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140317_230107_233.events @/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140317_230107_233.suites
>    [junit4] ERROR: JVM J0 ended with an exception: Forked process returned with error code: 240. Very likely a JVM crash.  Process output piped in logs above.
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1458)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:133)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:945)
>    [junit4] 	at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:942)
>    [junit4] 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>    [junit4] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>    [junit4] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>    [junit4] 	at java.lang.Thread.run(Thread.java:744)
> 
> BUILD FAILED
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:447: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:45: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:37: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:189: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:490: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1275: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:907: At least one slave process threw an exception, first: Forked process returned with error code: 240. Very likely a JVM crash.  Process output piped in logs above.
> 
> Total time: 165 minutes 4 seconds
> Build step 'Invoke Ant' marked build as failure
> Description set: Java: 32bit/jdk1.7.0_51 -server -XX:+UseG1GC
> Archiving artifacts
> Recording test results
> Email was triggered for: Failure
> Sending email for trigger: Failure
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org