Posted to dev@lucene.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/08/09 17:39:19 UTC
[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 594 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/594/
1 tests failed.
REGRESSION: org.apache.lucene.index.TestFlushByRamOrCountsPolicy.testFlushDocCount
Error Message:
Captured an uncaught exception in thread: Thread[id=3560, name=Thread-2881, state=RUNNABLE, group=TGRP-TestFlushByRamOrCountsPolicy]
Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=3560, name=Thread-2881, state=RUNNABLE, group=TGRP-TestFlushByRamOrCountsPolicy]
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Document contains at least one immense term in field="body" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[125, 125, 123, 123, 123, 123, 123, 115, 117, 98, 115, 116, 99, 124, 125, 125, 125, 123, 123, 123, 49, 125, 125, 125, 124, 123, 123, 123, 112, 49]...', original message: bytes can be at most 32766 in length; got 94384
at __randomizedtesting.SeedInfo.seed([A89700DEDEDF8395]:0)
at org.apache.lucene.index.TestFlushByRamOrCountsPolicy$IndexThread.run(TestFlushByRamOrCountsPolicy.java:331)
Caused by: java.lang.IllegalArgumentException: Document contains at least one immense term in field="body" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[125, 125, 123, 123, 123, 123, 123, 115, 117, 98, 115, 116, 99, 124, 125, 125, 125, 123, 123, 123, 49, 125, 125, 125, 124, 123, 123, 123, 112, 49]...', original message: bytes can be at most 32766 in length; got 94384
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:671)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:451)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1539)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1235)
at org.apache.lucene.index.TestFlushByRamOrCountsPolicy$IndexThread.run(TestFlushByRamOrCountsPolicy.java:316)
Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 94384
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
... 8 more
Suppressed: java.lang.AssertionError: close() called in wrong state: INCREMENT
at org.apache.lucene.analysis.MockTokenizer.close(MockTokenizer.java:262)
at org.apache.lucene.analysis.TokenFilter.close(TokenFilter.java:58)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:661)
... 8 more
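The exception above states the constraint directly: a single indexed term may be at most 32766 bytes in its UTF-8 encoding, and the test's analyzer produced a 94384-byte term in the "body" field. As a minimal sketch of the check involved (the class and method names here are illustrative, not Lucene API; only the 32766 limit comes from the log), one can detect such a term before handing it to the indexer:

```java
import java.nio.charset.StandardCharsets;

public class TermLengthCheck {
    // Hard per-term limit quoted in the exception message: 32766 UTF-8 bytes.
    static final int MAX_TERM_BYTES = 32766;

    // True if indexing this term would trip MaxBytesLengthExceededException.
    static boolean isImmense(String term) {
        return term.getBytes(StandardCharsets.UTF_8).length > MAX_TERM_BYTES;
    }

    public static void main(String[] args) {
        // A 94384-char ASCII string encodes to 94384 bytes, as in the failure above.
        String immense = "a".repeat(94384);
        System.out.println(isImmense("hello"));  // false
        System.out.println(isImmense(immense));  // true
    }
}
```

Note that the limit applies to the encoded byte length, not the character count, so a char-based length filter can still let multi-byte terms through.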
Build Log:
[...truncated 1120 lines...]
[junit4] Suite: org.apache.lucene.index.TestFlushByRamOrCountsPolicy
[junit4] 1> FAILED exc:
[junit4] 1> java.lang.IllegalArgumentException: Document contains at least one immense term in field="body" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[125, 125, 123, 123, 123, 123, 123, 115, 117, 98, 115, 116, 99, 124, 125, 125, 125, 123, 123, 123, 49, 125, 125, 125, 124, 123, 123, 123, 112, 49]...', original message: bytes can be at most 32766 in length; got 94384
[junit4] 1> at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:671)
[junit4] 1> at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
[junit4] 1> at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
[junit4] 1> at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
[junit4] 1> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:451)
[junit4] 1> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1539)
[junit4] 1> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
[junit4] 1> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1235)
[junit4] 1> at org.apache.lucene.index.TestFlushByRamOrCountsPolicy$IndexThread.run(TestFlushByRamOrCountsPolicy.java:316)
[junit4] 1> Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 94384
[junit4] 1> at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
[junit4] 1> at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
[junit4] 1> at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
[junit4] 1> ... 8 more
[junit4] 1> Suppressed: java.lang.AssertionError: close() called in wrong state: INCREMENT
[junit4] 1> at org.apache.lucene.analysis.MockTokenizer.close(MockTokenizer.java:262)
[junit4] 1> at org.apache.lucene.analysis.TokenFilter.close(TokenFilter.java:58)
[junit4] 1> at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:661)
[junit4] 1> ... 8 more
[junit4] 2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestFlushByRamOrCountsPolicy -Dtests.method=testFlushDocCount -Dtests.seed=A89700DEDEDF8395 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt -Dtests.locale=es_EC -Dtests.timezone=US/Michigan -Dtests.file.encoding=US-ASCII
[junit4] ERROR 0.54s J0 | TestFlushByRamOrCountsPolicy.testFlushDocCount <<<
[junit4] > Throwable #1: java.lang.AssertionError: expected:<170> but was:<101>
[junit4] > at org.apache.lucene.index.TestFlushByRamOrCountsPolicy.testFlushDocCount(TestFlushByRamOrCountsPolicy.java:160)
[junit4] > at java.lang.Thread.run(Thread.java:745)Throwable #2: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=3560, name=Thread-2881, state=RUNNABLE, group=TGRP-TestFlushByRamOrCountsPolicy]
[junit4] > Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Document contains at least one immense term in field="body" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[125, 125, 123, 123, 123, 123, 123, 115, 117, 98, 115, 116, 99, 124, 125, 125, 125, 123, 123, 123, 49, 125, 125, 125, 124, 123, 123, 123, 112, 49]...', original message: bytes can be at most 32766 in length; got 94384
[junit4] > at __randomizedtesting.SeedInfo.seed([A89700DEDEDF8395]:0)
[junit4] > at org.apache.lucene.index.TestFlushByRamOrCountsPolicy$IndexThread.run(TestFlushByRamOrCountsPolicy.java:331)
[junit4] > Caused by: java.lang.IllegalArgumentException: Document contains at least one immense term in field="body" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[125, 125, 123, 123, 123, 123, 123, 115, 117, 98, 115, 116, 99, 124, 125, 125, 125, 123, 123, 123, 49, 125, 125, 125, 124, 123, 123, 123, 112, 49]...', original message: bytes can be at most 32766 in length; got 94384
[junit4] > at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:671)
[junit4] > at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
[junit4] > at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
[junit4] > at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
[junit4] > at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:451)
[junit4] > at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1539)
[junit4] > at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
[junit4] > at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1235)
[junit4] > at org.apache.lucene.index.TestFlushByRamOrCountsPolicy$IndexThread.run(TestFlushByRamOrCountsPolicy.java:316)
[junit4] > Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 94384
[junit4] > at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
[junit4] > at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
[junit4] > at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
[junit4] > ... 8 more
[junit4] > Suppressed: java.lang.AssertionError: close() called in wrong state: INCREMENT
[junit4] > at org.apache.lucene.analysis.MockTokenizer.close(MockTokenizer.java:262)
[junit4] > at org.apache.lucene.analysis.TokenFilter.close(TokenFilter.java:58)
[junit4] > at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:661)
[junit4] > ... 8 more
[junit4] 2> NOTE: leaving temporary files on disk at: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/lucene/build/core/test/J0/./lucene.index.TestFlushByRamOrCountsPolicy-A89700DEDEDF8395-001
[junit4] 2> NOTE: test params are: codec=Lucene49: {titleTokenized=PostingsFormat(name=Memory doPackFST= false), date=Pulsing41(freqCutoff=9 minBlockSize=35 maxBlockSize=138), body=Pulsing41(freqCutoff=9 minBlockSize=35 maxBlockSize=138), title=Pulsing41(freqCutoff=15 minBlockSize=35 maxBlockSize=138), docid=FSTOrd41}, docValues:{titleDV=DocValuesFormat(name=Memory)}, sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {titleTokenized=DFR G1, body=DFR I(n)L1}, locale=es_EC, timezone=US/Michigan
[junit4] 2> NOTE: FreeBSD 9.1-RELEASE-p3 amd64/Oracle Corporation 1.7.0_60 (64-bit)/cpus=16,threads=1,free=213787512,total=457703424
[junit4] 2> NOTE: All tests run in this JVM: [TestSimilarityProvider, TestSearch, TestLockFactory, TestIndexWriterCommit, TestLazyProxSkipping, TestCompiledAutomaton, TestTermsEnum, ThrowInUncaught, TestFilteredSearch, TestSloppyPhraseQuery, TestScoreCachingWrappingScorer, TestNumericDocValuesUpdates, TestAllFilesHaveCodecHeader, TestDemo, TestNoDeletionPolicy, TestCharTermAttributeImpl, TestMultiValuedNumericRangeQuery, TestNearSpansOrdered, TestDeletionPolicy, TestBytesStore, TestScorerPerf, Nested, Nested, TestOmitPositions, TestDocTermOrdsRewriteMethod, TestFieldsReader, TestTwoPhaseCommitTool, TestSearcherManager, TestNoMergeScheduler, TestSortRescorer, TestDoc, TestQueryWrapperFilter, TestSearchAfter, TestSort, Test2BPostings, TestFieldCacheRewriteMethod, TestLock, TestSimpleAttributeImpl, TestVirtualMethod, TestBufferedChecksum, TestBytesRef, TestFieldReuse, TestPForDeltaDocIdSet, TestIndexableBinaryStringTools, TestIndexableField, TestEarlyTermination, TestPriorityQueue, TestIndexSearcher, TestPerSegmentDeletes, TestMultiPhraseQuery, TestIndexWriterReader, TestNumericRangeQuery32, TestLucene42DocValuesFormat, TestDocValuesWithThreads, TestLookaheadTokenFilter, TestForceMergeForever, TestIndexCommit, TestTermScorer, Test2BBinaryDocValues, TestIndexWriterConfig, TestIntroSorter, TestLiveFieldValues, TestPrefixRandom, TestMathUtil, TestMultiTermQueryRewrites, TestSpans, TestBagOfPostings, TestTermsEnum2, TestFastDecompressionMode, TestFilterIterator, TestIndexWriterExceptions2, TestStoredFieldsFormat, TestSentinelIntSet, TestTopScoreDocCollector, TestStressNRT, TestReuseDocsEnum, TestMergeSchedulerExternal, Test2BPostingsBytes, TestDocIdBitSet, TestIndexReaderClose, TestExceedMaxTermLength, TestTerms, TestPersistentSnapshotDeletionPolicy, TestPhraseQuery, TestUTF32ToUTF8, TestComplexExplanations, TestLucene45DocValuesFormat, TestSpanExplanations, TestQueryRescorer, TestTopDocsCollector, TestTopFieldCollector, TestReusableStringReader, TestSetOnce, 
Test2BPagedBytes, TestPostingsFormat, TestPhrasePrefixQuery, TestBinaryDocValuesUpdates, TestByteSlices, TestCrash, TestLongBitSet, TestTermVectors, TestCompoundFile, TestIndexWriterForceMerge, TestIndexWriterLockRelease, TestPayloadsOnVectors, TestRecyclingByteBlockAllocator, TestBlockPostingsFormat3, TestFlushByRamOrCountsPolicy]
[junit4] Completed on J0 in 10.63s, 5 tests, 1 error <<< FAILURES!
[...truncated 719 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/build.xml:481: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/build.xml:454: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/build.xml:45: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/extra-targets.xml:37: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/lucene/build.xml:49: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/lucene/common-build.xml:1299: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-NightlyTests-4.x/lucene/common-build.xml:923: There were test failures: 417 suites, 3600 tests, 1 error, 129 ignored (118 assumptions)
Total time: 88 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-NightlyTests-4.x #586
Archived 3 artifacts
Archive block size is 32768
Received 0 blocks and 4826209 bytes
Compression is 0.0%
Took 3.1 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
Re: [JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 594 - Still Failing
Posted by Michael McCandless <lu...@mikemccandless.com>.
I committed a fix ...
Mike McCandless
http://blog.mikemccandless.com
On Sat, Aug 9, 2014 at 11:39 AM, Apache Jenkins Server
<je...@builds.apache.org> wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/594/
>
> 1 tests failed.
> REGRESSION: org.apache.lucene.index.TestFlushByRamOrCountsPolicy.testFlushDocCount
> [...]
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org