Posted to java-user@lucene.apache.org by Peter Keegan <pe...@gmail.com> on 2009/10/24 23:20:08 UTC

IO exception during merge/optimize

I'm sometimes seeing the following exception from an operation that does a
merge and optimize:
 java.io.IOException: background merge hit exception: _0:C1082866 _1:C79
into _2 [optimize] [mergeDocStores]
I'm pretty sure that it's caused by a temporary low disk space condition,
but I'd like to be able to confirm this. It would be nice to have the
underlying Java exception included in the Lucene exception. Is there any
way to get it?

Peter
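
For reference, a minimal sketch of how the underlying cause might be
surfaced with the 2.9 API: IndexWriter.setInfoStream() logs merge activity
(including merge exceptions with stack traces), and the merge's root-cause
exception may be attached as the cause of the IOException thrown from
optimize(). The index path handling here is illustrative only.

import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class OptimizeWithDiagnostics {
  public static void main(String[] args) throws IOException {
    IndexWriter writer = new IndexWriter(
        FSDirectory.open(new File(args[0])),
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);
    // Log flushes and merges; a failed background merge shows up here
    // with its full stack trace (e.g. a disk-full IOException).
    writer.setInfoStream(System.out);
    try {
      writer.optimize();
    } catch (IOException e) {
      // The original exception from the merge thread may be attached as
      // the cause of the "background merge hit exception" IOException.
      Throwable cause = e.getCause();
      System.err.println("optimize failed: " + e
          + (cause != null ? "\ncaused by: " + cause : ""));
      throw e;
    } finally {
      writer.close();
    }
  }
}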

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
My last post got truncated - probably exceeded max msg size. Let me know if
you want to see more of the IndexWriter log.

Peter

RE: IO exception during merge/optimize

Posted by Uwe Schindler <uw...@thetaphi.de>.
> >Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?
> This appears to be something added by the ant build, since I built Lucene
> from the source code.

This is because it was built from a source artifact with no SVN revision
information. Normally the SVN revision number appears in that place,
determined by a call to svnversion during the build.
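
For the curious, that string can also be read back at runtime from the jar
manifest; a minimal sketch (when the classes are not loaded from a jar, the
version may be null):

import org.apache.lucene.LucenePackage;

public class ShowLuceneVersion {
  public static void main(String[] args) {
    // The implementation version in the manifest is where strings like
    // "2.9 exported - 2009-10-27 15:31:52" come from; "exported" stands
    // in for the svnversion output when building from a source release.
    Package p = LucenePackage.get();
    System.out.println(p == null ? "(no package info)"
        : p.getImplementationVersion());
  }
}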






Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
A couple more data points:

RamSize    Index (min)    Optimize (min)    Peak mem
1.9G       24             5                 5G
800M       24             5                 4G
400M       25             5                 3.5G
100M       25             5                 3G
50M        26             4                 3G

Peter
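
For context, a minimal sketch of how the buffer size would be varied across
these runs; the helper and the choice to disable doc-count flushing are
assumptions, not from the thread:

import org.apache.lucene.index.IndexWriter;

final class BufferConfig {
  // ramMB corresponds to the RamSize column above: 1900, 800, 400, 100, 50.
  static void apply(IndexWriter writer, double ramMB) {
    writer.setRAMBufferSizeMB(ramMB);  // note: 2.9 caps this just under 2048 MB
    // Flush on RAM usage alone, so ramMB is the only flush trigger.
    writer.setMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH);
  }
}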


On Thu, Oct 29, 2009 at 8:49 PM, Mark Miller <ma...@gmail.com> wrote:

> Thanks a lot Peter! Really appreciate it.

Re: IO exception during merge/optimize

Posted by Mark Miller <ma...@gmail.com>.
Thanks a lot Peter! Really appreciate it.


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Mark,

With 1.9G, I had to increase the JVM heap significantly (to 8G) to avoid
paging and GC hits. Here is a table comparing indexing times, optimizing
times and peak memory usage as a function of the RAMBufferSize. This was
run on a 64-bit server with 32GB RAM:

RamSize    Index (min)    Optimize (min)    Max VM
1.9G       24             5                 5G
800M       24             5                 4G

Not much difference. I'll make a couple more runs with lower values.
Btw, the indexing times are really about 5 min. shorter because of some
non-Lucene related delays after the last document.

Peter




Re: IO exception during merge/optimize

Posted by Mark Miller <ma...@gmail.com>.
Any chance I could get you to try that again with a buffer of like 800MB
to a gig and do a comparison?

I've been investigating the returns you get with a larger buffer size.
It appears to be pretty diminishing returns over 100MB or so - at higher
than that, I've gotten both slower speeds for some sizes, and larger
gains for others. But only better by 5-10 docs a second up to a gig. But
I can't reliably test at over a gig - I have only 4 GB of RAM, and even
with that, at over a gig it starts to page and the performance gets hit.
I'd love to see what kind of benefit you see going from around a gig to
just under 2.
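
A minimal sketch of this kind of comparison: index a fixed batch of
synthetic documents at a given buffer size and report docs/sec. The
document count, field name, and contents are assumptions:

import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class BufferSizeBench {
  public static void main(String[] args) throws IOException {
    double ramMB = Double.parseDouble(args[0]);  // e.g. 100, 800, 1900
    IndexWriter w = new IndexWriter(
        FSDirectory.open(new File("bench-index")),
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);
    w.setRAMBufferSizeMB(ramMB);
    final int numDocs = 100000;
    long start = System.currentTimeMillis();
    for (int i = 0; i < numDocs; i++) {
      Document doc = new Document();
      doc.add(new Field("contents", "sample text for doc " + i,
                        Field.Store.NO, Field.Index.ANALYZED));
      w.addDocument(doc);
    }
    w.close();  // includes the final flush
    double secs = (System.currentTimeMillis() - start) / 1000.0;
    System.out.println(ramMB + " MB buffer: "
        + (long) (numDocs / secs) + " docs/sec");
  }
}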



-- 
- Mark

http://www.lucidimagination.com






Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
I'm glad we finally got to the bottom of this :)  This fix will be in 2.9.1.

This is a nice fast indexing result, too...

Mike
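
For anyone hitting this on a pre-2.9.1 jar: the root cause was U+FFFF in a
few source documents, which the 2.9.0 indexer mishandles (LUCENE-2016 fixes
this during indexing). A defensive workaround is to strip that character
during analysis; the following is only a sketch, and the filter name and
its placement in an analyzer chain are assumptions:

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Drops U+FFFF characters from each token before it reaches the indexer.
public final class StripUFFFFFilter extends TokenFilter {
  private final TermAttribute termAtt = addAttribute(TermAttribute.class);

  public StripUFFFFFilter(TokenStream in) {
    super(in);
  }

  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    char[] buf = termAtt.termBuffer();
    int len = termAtt.termLength();
    int out = 0;
    for (int i = 0; i < len; i++) {
      if (buf[i] != '\uFFFF') {
        buf[out++] = buf[i];
      }
    }
    termAtt.setTermLength(out);  // rewrite the term text in place
    return true;
  }
}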

On Thu, Oct 29, 2009 at 3:55 PM, Peter Keegan <pe...@gmail.com> wrote:
> Btw, this 2.9 indexer is fast! I indexed 4Gb (1.07 million docs) with
> optimization in just under 30 min.
> I used setRAMBufferSizeMB=1.9G
>
> Peter
>
> On Thu, Oct 29, 2009 at 3:46 PM, Peter Keegan <pe...@gmail.com> wrote:
>
>> A handful of the source documents did contain the U+FFFF character. The
>> patch from LUCENE-2016 <https://issues.apache.org/jira/browse/LUCENE-2016>
>> fixed the problem.
>> Thanks Mike!
>>
>> Peter
>>
>>
>> On Wed, Oct 28, 2009 at 1:29 PM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>>> Hmm, only a few affected terms, and all this particular
>>> "literals:cfid196$" term, with optional suffixes.  Really strange.
>>>
>>> One thing that's odd is that the exact term "literals:cfid196$" is printed
>>> twice, which should never happen (every unique term should be stored
>>> only once, in the terms dict).
>>>
>>> And, otherwise, CheckIndex got through the index just fine.
>>>
>>> Try searching a TermQuery with these affected terms and see if it
>>> succeeds?  If so, maybe try making an index with one or two of
>>> them, alone, and see if that index shows the problem?
>>>
>>> OK I'm attaching more mods.  Can you re-run your CheckIndex?  It will
>>> produce an enormous amount of output, but if you can excise the few
>>> lines around when that warning comes out & post back that'd be great.
>>>
>>> Mike
>>>
>>> On Wed, Oct 28, 2009 at 12:23 PM, Peter Keegan <pe...@gmail.com>
>>> wrote:
>>> > Just to be safe, I ran with the official jar file from one of the
>>> > mirrors and reproduced the problem.
>>> > The debug session is not showing any '\uffff' characters (checking this
>>> > in the Tokenizer).
>>> > The output from the modified CheckIndex follows. There are only a few
>>> > terms with the inconsistency. They are all legitimate terms from the
>>> > app's context. With this info, I might be able to isolate the source
>>> > documents. What should I be looking for when they are indexed?
>>> >
>>> > CheckIndex output:
>>> >
>>> > Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.4
>>> >
>>> > Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene 2.9]
>>> >  1 of 3: name=_0 docCount=413585
>>> >    compound=false
>>> >    hasProx=true
>>> >    numFiles=8
>>> >    size (MB)=1,148.817
>>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>>> >    docStoreOffset=0
>>> >    docStoreSegment=_0
>>> >    docStoreIsCompoundFile=false
>>> >    no deletions
>>> >    test: open reader.........OK
>>> >    test: fields..............OK [33 fields]
>>> >    test: field norms.........OK [33 fields]
>>> >    test: terms, freq, prox...OK [7704753 terms; 180326717 terms/docs pairs; 340244234 tokens]
>>> >    test: stored fields.......OK [1240755 total field count; avg 3 fields per doc]
>>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
>>> >
>>> >  2 of 3: name=_1 docCount=359068
>>> >    compound=false
>>> >    hasProx=true
>>> >    numFiles=8
>>> >    size (MB)=1,125.161
>>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>>> >    docStoreOffset=413585
>>> >    docStoreSegment=_0
>>> >    docStoreIsCompoundFile=false
>>> >    no deletions
>>> >    test: open reader.........OK
>>> >    test: fields..............OK [33 fields]
>>> >    test: field norms.........OK [33 fields]
>>> >    test: terms, freq, prox...WARNING: term  literals:cfid196$ docFreq=43 != num docs seen 4 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$commandant docFreq=1 != num docs seen 9 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$on docFreq=3178 != num docs seen 1 + num docs deleted 0
>>> > OK [7137621 terms; 179101847 terms/docs pairs; 346076058 tokens]
>>> >    test: stored fields.......OK [1077204 total field count; avg 3 fields per doc]
>>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
>>> >
>>> >  3 of 3: name=_2 docCount=304849
>>> >    compound=false
>>> >    hasProx=true
>>> >    numFiles=8
>>> >    size (MB)=962.004
>>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>>> >    docStoreOffset=772653
>>> >    docStoreSegment=_0
>>> >    docStoreIsCompoundFile=false
>>> >    no deletions
>>> >    test: open reader.........OK
>>> >    test: fields..............OK [33 fields]
>>> >    test: field norms.........OK [33 fields]
>>> >    test: terms, freq, prox...WARNING: term  contents:? docFreq=1 != num docs seen 246 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$ docFreq=45 != num docs seen 4 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$cashier docFreq=1 != num docs seen 37 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$interrogation docFreq=181 != num docs seen 1 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$leader docFreq=1 != num docs seen 353 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$microsoft docFreq=3114 != num docs seen 1 + num docs deleted 0
>>> > WARNING: term  literals:cfid196$nt docFreq=200 != num docs seen 1 + num docs deleted 0
>>> > OK [6497769 terms; 145296880 terms/docs pairs; 293458734 tokens]
>>> >    test: stored fields.......OK [914547 total field count; avg 3 fields per doc]
>>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
>>> >
>>> > No problems were detected with this index.
>>> >
>>> > Peter
>>> >
>>> >
>>> > On Wed, Oct 28, 2009 at 11:29 AM, Michael McCandless <
>>> > lucene@mikemccandless.com> wrote:
>>> >
>>> >> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan
>>> >> <peterlkeegan@gmail.com> wrote:
>>> >> > The only change I made to the source code was the patch for
>>> >> > PayloadNearQuery (LUCENE-1986).
>>> >>
>>> >> That patch certainly shouldn't lead to this.
>>> >>
>>> >> > It's possible that our content contains U+FFFF. I will run in the
>>> >> > debugger and see.
>>> >>
>>> >> OK may as well check just so we cover all possibilities.
>>> >>
>>> >> > The data is 'sensitive', so I may not be able to provide a bad
>>> >> > segment, unfortunately.
>>> >>
>>> >> OK, maybe we can modify your CheckIndex instead.  Let's start with
>>> >> this, which prints a warning whenever the docFreq differs but
>>> >> otherwise continues (vs throwing RuntimeException).  I'm curious how
>>> >> many terms show this, and whether the TermEnum keeps working after
>>> >> this term that has different docFreq:
>>> >>
>>> >> Index: src/java/org/apache/lucene/index/CheckIndex.java
>>> >> ===================================================================
>>> >> --- src/java/org/apache/lucene/index/CheckIndex.java    (revision 829889)
>>> >> +++ src/java/org/apache/lucene/index/CheckIndex.java    (working copy)
>>> >> @@ -672,8 +672,8 @@
>>> >>         }
>>> >>
>>> >>         if (freq0 + delCount != docFreq) {
>>> >> -          throw new RuntimeException("term " + term + " docFreq=" +
>>> >> -                                     docFreq + " != num docs seen " +
>>> >> -                                     freq0 + " + num docs deleted " + delCount);
>>> >> +          System.out.println("WARNING: term  " + term + " docFreq=" +
>>> >> +                             docFreq + " != num docs seen " + freq0 +
>>> >> +                             " + num docs deleted " + delCount);
>>> >>         }
>>> >>       }
>>> >>
>>> >> Mike
>>> >>
>>> >>
>>> >>
>>> >
>>>
>>
>>
>
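
As a follow-up to Mike's TermQuery suggestion above, a minimal sketch of
the check with the 2.9 API; the field and term are taken from the
CheckIndex warnings, and any other flagged term would do:

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class DocFreqCheck {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open(
        FSDirectory.open(new File(args[0])), true);  // read-only
    IndexSearcher searcher = new IndexSearcher(reader);
    Term term = new Term("literals", "cfid196$");    // one of the flagged terms
    TopDocs hits = searcher.search(new TermQuery(term), reader.maxDoc());
    // In a healthy index with no deletions these two numbers agree;
    // a mismatch reproduces the inconsistency CheckIndex warned about.
    System.out.println("docFreq=" + reader.docFreq(term)
        + ", actual hits=" + hits.totalHits);
    searcher.close();
    reader.close();
  }
}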



Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Btw, this 2.9 indexer is fast! I indexed 4Gb (1.07 million docs) with
optimization in just under 30 min.
I used setRAMBufferSizeMB=1.9G
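
(A minimal sketch of that setting against the 2.9 API; the path,
analyzer, and class name here are placeholders, not the actual indexer:

  import java.io.File;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  public class BigBufferIndexer {
    public static void main(String[] args) throws Exception {
      FSDirectory dir = FSDirectory.open(new File(args[0]));
      IndexWriter writer = new IndexWriter(dir,
          new StandardAnalyzer(Version.LUCENE_29), true,
          IndexWriter.MaxFieldLength.UNLIMITED);
      writer.setRAMBufferSizeMB(1900.0); // "1.9G" expressed in MB
      // ... writer.addDocument(...) for each document ...
      writer.optimize(); // merge down to a single segment
      writer.close();
    }
  }
)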

Peter

On Thu, Oct 29, 2009 at 3:46 PM, Peter Keegan <pe...@gmail.com> wrote:

> A handful of the source documents did contain the U+FFFF character. The
> patch from LUCENE-2016 <https://issues.apache.org/jira/browse/LUCENE-2016>
> fixed the problem.
> Thanks Mike!
>
> Peter
>
>
> On Wed, Oct 28, 2009 at 1:29 PM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> Hmm, only a few affected terms, and all this particular
>> "literals:cfid196$" term, with optional suffixes.  Really strange.
>>
>> One thing that's odd is the exact term "literals:cfid196$" is printed
>> twice, which should never happen (every unique term should be stored
>> only once, in the terms dict).
>>
>> And, otherwise, CheckIndex got through the index just fine.
>>
>> Try searching a TermQuery with these affected terms and see if it
>> succeeds?  If so, maybe try making an index with one or two of
>> them, alone, and see if that index shows the problem?
>>
>> OK I'm attaching more mods.  Can you re-run your CheckIndex?  It will
>> produce an enormous amount of output, but if you can excise the few
>> lines around when that warning comes out & post back that'd be great.
>>
>> Mike
>>
>> On Wed, Oct 28, 2009 at 12:23 PM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > Just to be safe, I ran with the official jar file from one of the
>> mirrors
>> > and reproduced the problem.
>> > The debug session is not showing any characters = '\uffff' (checking
>> this in
>> > Tokenizer).
>> > The output from the modified CheckIndex follows. There are only a few
>> terms
>> > with the inconsistency. They are all legitimate terms from the app's
>> > context. With this info, I might be able to isolate the source
>> documents.
>> > What should I be looking for when they are indexed?
>> >
>> > CheckIndex output:
>> >
>> > Opening index @
>> D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.4
>> >
>> > Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS
>> [Lucene
>> > 2.9]
>> >  1 of 3: name=_0 docCount=413585
>> >    compound=false
>> >    hasProx=true
>> >    numFiles=8
>> >    size (MB)=1,148.817
>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>> >    docStoreOffset=0
>> >    docStoreSegment=_0
>> >    docStoreIsCompoundFile=false
>> >    no deletions
>> >    test: open reader.........OK
>> >    test: fields..............OK [33 fields]
>> >    test: field norms.........OK [33 fields]
>> >    test: terms, freq, prox...OK [7704753 terms; 180326717 terms/docs
>> pairs;
>> > 340244234 tokens]
>> >    test: stored fields.......OK [1240755 total field count; avg 3 fields
>> > per doc]
>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
>> > vector fields per doc]
>> >
>> >  2 of 3: name=_1 docCount=359068
>> >    compound=false
>> >    hasProx=true
>> >    numFiles=8
>> >    size (MB)=1,125.161
>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>> >    docStoreOffset=413585
>> >    docStoreSegment=_0
>> >    docStoreIsCompoundFile=false
>> >    no deletions
>> >    test: open reader.........OK
>> >    test: fields..............OK [33 fields]
>> >    test: field norms.........OK [33 fields]
>> >    test: terms, freq, prox...WARNING: term  literals:cfid196$ docFreq=43
>> !=
>> > num docs seen 4 + num docs deleted 0
>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
>> > deleted 0
>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
>> > deleted 0
>> > WARNING: term  literals:cfid196$commandant docFreq=1 != num docs seen 9
>> +
>> > num docs deleted 0
>> > WARNING: term  literals:cfid196$on docFreq=3178 != num docs seen 1 + num
>> > docs deleted 0
>> > OK [7137621 terms; 179101847 terms/docs pairs; 346076058 tokens]
>> >    test: stored fields.......OK [1077204 total field count; avg 3 fields
>> > per doc]
>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
>> > vector fields per doc]
>> >
>> >  3 of 3: name=_2 docCount=304849
>> >    compound=false
>> >    hasProx=true
>> >    numFiles=8
>> >    size (MB)=962.004
>> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
>> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
>> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>> >    docStoreOffset=772653
>> >    docStoreSegment=_0
>> >    docStoreIsCompoundFile=false
>> >    no deletions
>> >    test: open reader.........OK
>> >    test: fields..............OK [33 fields]
>> >    test: field norms.........OK [33 fields]
>> >    test: terms, freq, prox...WARNING: term  contents:? docFreq=1 != num
>> > docs seen 246 + num docs deleted 0
>> > WARNING: term  literals:cfid196$ docFreq=45 != num docs seen 4 + num
>> docs
>> > deleted 0
>> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
>> > deleted 0
>> > WARNING: term  literals:cfid196$cashier docFreq=1 != num docs seen 37 +
>> num
>> > docs deleted 0
>> > WARNING: term  literals:cfid196$interrogation docFreq=181 != num docs
>> seen 1
>> > + num docs deleted 0
>> > WARNING: term  literals:cfid196$leader docFreq=1 != num docs seen 353 +
>> num
>> > docs deleted 0
>> > WARNING: term  literals:cfid196$microsoft docFreq=3114 != num docs seen
>> 1 +
>> > num docs deleted 0
>> > WARNING: term  literals:cfid196$nt docFreq=200 != num docs seen 1 + num
>> docs
>> > deleted 0
>> > OK [6497769 terms; 145296880 terms/docs pairs; 293458734 tokens]
>> >    test: stored fields.......OK [914547 total field count; avg 3 fields
>> per
>> > doc]
>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
>> > vector fields per doc]
>> >
>> > No problems were detected with this index.
>> >
>> > Peter
>> >
>> >
>> > On Wed, Oct 28, 2009 at 11:29 AM, Michael McCandless <
>> > lucene@mikemccandless.com> wrote:
>> >
>> >> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <peterlkeegan@gmail.com
>> >
>> >> wrote:
>> >> > The only change I made to the source code was the patch for
>> >> PayloadNearQuery
>> >> > (LUCENE-1986).
>> >>
>> >> That patch certainly shouldn't lead to this.
>> >>
>> >> > It's possible that our content contains U+FFFF. I will run in
>> debugger
>> >> and
>> >> > see.
>> >>
>> >> OK may as well check just so we cover all possibilities.
>> >>
>> >> > The data is 'sensitive', so I may not be able to provide a bad
>> segment,
>> >> > unfortunately.
>> >>
>> >> OK, maybe we can modify your CheckIndex instead.  Let's start with
>> >> this, which prints a warning whenever the docFreq differs but
>> >> otherwise continues (vs throwing RuntimeException).  I'm curious how
>> >> many terms show this, and whether the TermEnum keeps working after
>> >> this term that has different docFreq:
>> >>
>> >> Index: src/java/org/apache/lucene/index/CheckIndex.java
>> >> ===================================================================
>> >> --- src/java/org/apache/lucene/index/CheckIndex.java    (revision
>> 829889)
>> >> +++ src/java/org/apache/lucene/index/CheckIndex.java    (working copy)
>> >> @@ -672,8 +672,8 @@
>> >>         }
>> >>
>> >>         if (freq0 + delCount != docFreq) {
>> >> -          throw new RuntimeException("term " + term + " docFreq=" +
>> >> -                                     docFreq + " != num docs seen " +
>> >> freq0 + " + num docs deleted " + delCount);
>> >> +          System.out.println("WARNING: term  " + term + " docFreq=" +
>> >> +                             docFreq + " != num docs seen " + freq0 +
>> >> " + num docs deleted " + delCount);
>> >>         }
>> >>       }
>> >>
>> >> Mike
>> >>
>> >> ---------------------------------------------------------------------
>> >> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> >> For additional commands, e-mail: java-user-help@lucene.apache.org
>> >>
>> >>
>> >
>>
>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
A handful of the source documents did contain the U+FFFF character. The
patch from LUCENE-2016 <https://issues.apache.org/jira/browse/LUCENE-2016>
fixed the problem.
Thanks Mike!
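
(For anyone hitting this before picking up the patch, a pre-indexing
guard is possible; this is a sketch, not part of LUCENE-2016 itself:

  // Hypothetical scrub: U+FFFF is used internally by Lucene 2.9 as an
  // end-of-term marker, so keep it out of field text before analysis.
  public static String scrubFFFF(String text) {
    return text.indexOf('\uFFFF') < 0 ? text : text.replace('\uFFFF', ' ');
  }
)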

Peter

On Wed, Oct 28, 2009 at 1:29 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> Hmm, only a few affected terms, and all this particular
> "literals:cfid196$" term, with optional suffixes.  Really strange.
>
> One thing that's odd is the exact term "literals:cfid196$" is printed
> twice, which should never happen (every unique term should be stored
> only once, in the terms dict).
>
> And, otherwise, CheckIndex got through the index just fine.
>
> Try searching a TermQuery with these affected terms and see if it
> succeeds?  If so, maybe try making an index with one or two of
> them, alone, and see if that index shows the problem?
>
> OK I'm attaching more mods.  Can you re-run your CheckIndex?  It will
> produce an enormous amount of output, but if you can excise the few
> lines around when that warning comes out & post back that'd be great.
>
> Mike
>
> On Wed, Oct 28, 2009 at 12:23 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> > Just to be safe, I ran with the official jar file from one of the mirrors
> > and reproduced the problem.
> > The debug session is not showing any characters = '\uffff' (checking this
> in
> > Tokenizer).
> > The output from the modified CheckIndex follows. There are only a few
> terms
> > with the inconsistency. They are all legitimate terms from the app's
> > context. With this info, I might be able to isolate the source documents.
> > What should I be looking for when they are indexed?
> >
> > CheckIndex output:
> >
> > Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.4
> >
> > Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
> > 2.9]
> >  1 of 3: name=_0 docCount=413585
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=1,148.817
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=0
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...OK [7704753 terms; 180326717 terms/docs
> pairs;
> > 340244234 tokens]
> >    test: stored fields.......OK [1240755 total field count; avg 3 fields
> > per doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> >
> >  2 of 3: name=_1 docCount=359068
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=1,125.161
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=413585
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...WARNING: term  literals:cfid196$ docFreq=43
> !=
> > num docs seen 4 + num docs deleted 0
> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> > deleted 0
> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> > deleted 0
> > WARNING: term  literals:cfid196$commandant docFreq=1 != num docs seen 9 +
> > num docs deleted 0
> > WARNING: term  literals:cfid196$on docFreq=3178 != num docs seen 1 + num
> > docs deleted 0
> > OK [7137621 terms; 179101847 terms/docs pairs; 346076058 tokens]
> >    test: stored fields.......OK [1077204 total field count; avg 3 fields
> > per doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> >
> >  3 of 3: name=_2 docCount=304849
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=962.004
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> > 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=772653
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...WARNING: term  contents:? docFreq=1 != num
> > docs seen 246 + num docs deleted 0
> > WARNING: term  literals:cfid196$ docFreq=45 != num docs seen 4 + num docs
> > deleted 0
> > WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> > deleted 0
> > WARNING: term  literals:cfid196$cashier docFreq=1 != num docs seen 37 +
> num
> > docs deleted 0
> > WARNING: term  literals:cfid196$interrogation docFreq=181 != num docs
> seen 1
> > + num docs deleted 0
> > WARNING: term  literals:cfid196$leader docFreq=1 != num docs seen 353 +
> num
> > docs deleted 0
> > WARNING: term  literals:cfid196$microsoft docFreq=3114 != num docs seen 1
> +
> > num docs deleted 0
> > WARNING: term  literals:cfid196$nt docFreq=200 != num docs seen 1 + num
> docs
> > deleted 0
> > OK [6497769 terms; 145296880 terms/docs pairs; 293458734 tokens]
> >    test: stored fields.......OK [914547 total field count; avg 3 fields
> per
> > doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> >
> > No problems were detected with this index.
> >
> > Peter
> >
> >
> > On Wed, Oct 28, 2009 at 11:29 AM, Michael McCandless <
> > lucene@mikemccandless.com> wrote:
> >
> >> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com>
> >> wrote:
> >> > The only change I made to the source code was the patch for
> >> PayloadNearQuery
> >> > (LUCENE-1986).
> >>
> >> That patch certainly shouldn't lead to this.
> >>
> >> > It's possible that our content contains U+FFFF. I will run in debugger
> >> and
> >> > see.
> >>
> >> OK may as well check just so we cover all possibilities.
> >>
> >> > The data is 'sensitive', so I may not be able to provide a bad
> segment,
> >> > unfortunately.
> >>
> >> OK, maybe we can modify your CheckIndex instead.  Let's start with
> >> this, which prints a warning whenever the docFreq differs but
> >> otherwise continues (vs throwing RuntimeException).  I'm curious how
> >> many terms show this, and whether the TermEnum keeps working after
> >> this term that has different docFreq:
> >>
> >> Index: src/java/org/apache/lucene/index/CheckIndex.java
> >> ===================================================================
> >> --- src/java/org/apache/lucene/index/CheckIndex.java    (revision
> 829889)
> >> +++ src/java/org/apache/lucene/index/CheckIndex.java    (working copy)
> >> @@ -672,8 +672,8 @@
> >>         }
> >>
> >>         if (freq0 + delCount != docFreq) {
> >> -          throw new RuntimeException("term " + term + " docFreq=" +
> >> -                                     docFreq + " != num docs seen " +
> >> freq0 + " + num docs deleted " + delCount);
> >> +          System.out.println("WARNING: term  " + term + " docFreq=" +
> >> +                             docFreq + " != num docs seen " + freq0 +
> >> " + num docs deleted " + delCount);
> >>         }
> >>       }
> >>
> >> Mike
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> >> For additional commands, e-mail: java-user-help@lucene.apache.org
> >>
> >>
> >
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
Hmm, only a few affected terms, and all this particular
"literals:cfid196$" term, with optional suffixes.  Really strange.

One thing that's odd is the exact term "literals:cfid196$" is printed
twice, which should never happen (every unique term should be stored
only once, in the terms dict).

And, otherwise, CheckIndex got through the index just fine.

Try searching a TermQuery with these affected terms and see if it
succeeds?  If so, maybe try making an index with one or two of
them, alone, and see if that index shows the problem?
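
(Concretely, such a probe might look like the sketch below; the term is
taken from the CheckIndex warnings and the index path is passed in:

  import java.io.File;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.TermQuery;
  import org.apache.lucene.search.TopDocs;
  import org.apache.lucene.store.FSDirectory;

  public class ProbeTerm {
    public static void main(String[] args) throws Exception {
      // Open the suspect index read-only and query one affected term.
      IndexSearcher searcher =
          new IndexSearcher(FSDirectory.open(new File(args[0])), true);
      TopDocs hits = searcher.search(
          new TermQuery(new Term("literals", "cfid196$microsoft")), 10);
      System.out.println("totalHits=" + hits.totalHits);
      searcher.close();
    }
  }
)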

OK I'm attaching more mods.  Can you re-run your CheckIndex?  It will
produce an enormous amount of output, but if you can excise the few
lines around when that warning comes out & post back that'd be great.

Mike

On Wed, Oct 28, 2009 at 12:23 PM, Peter Keegan <pe...@gmail.com> wrote:
> Just to be safe, I ran with the official jar file from one of the mirrors
> and reproduced the problem.
> The debug session is not showing any characters = '\uffff' (checking this in
> Tokenizer).
> The output from the modified CheckIndex follows. There are only a few terms
> with the inconsistency. They are all legitimate terms from the app's
> context. With this info, I might be able to isolate the source documents.
> What should I be looking for when they are indexed?
>
> CheckIndex output:
>
> Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.4
>
> Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
> 2.9]
>  1 of 3: name=_0 docCount=413585
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=1,148.817
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=0
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...OK [7704753 terms; 180326717 terms/docs pairs;
> 340244234 tokens]
>    test: stored fields.......OK [1240755 total field count; avg 3 fields
> per doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
>
>  2 of 3: name=_1 docCount=359068
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=1,125.161
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=413585
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...WARNING: term  literals:cfid196$ docFreq=43 !=
> num docs seen 4 + num docs deleted 0
> WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> deleted 0
> WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> deleted 0
> WARNING: term  literals:cfid196$commandant docFreq=1 != num docs seen 9 +
> num docs deleted 0
> WARNING: term  literals:cfid196$on docFreq=3178 != num docs seen 1 + num
> docs deleted 0
> OK [7137621 terms; 179101847 terms/docs pairs; 346076058 tokens]
>    test: stored fields.......OK [1077204 total field count; avg 3 fields
> per doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
>
>  3 of 3: name=_2 docCount=304849
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=962.004
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
> 817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=772653
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...WARNING: term  contents:? docFreq=1 != num
> docs seen 246 + num docs deleted 0
> WARNING: term  literals:cfid196$ docFreq=45 != num docs seen 4 + num docs
> deleted 0
> WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
> deleted 0
> WARNING: term  literals:cfid196$cashier docFreq=1 != num docs seen 37 + num
> docs deleted 0
> WARNING: term  literals:cfid196$interrogation docFreq=181 != num docs seen 1
> + num docs deleted 0
> WARNING: term  literals:cfid196$leader docFreq=1 != num docs seen 353 + num
> docs deleted 0
> WARNING: term  literals:cfid196$microsoft docFreq=3114 != num docs seen 1 +
> num docs deleted 0
> WARNING: term  literals:cfid196$nt docFreq=200 != num docs seen 1 + num docs
> deleted 0
> OK [6497769 terms; 145296880 terms/docs pairs; 293458734 tokens]
>    test: stored fields.......OK [914547 total field count; avg 3 fields per
> doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
>
> No problems were detected with this index.
>
> Peter
>
>
> On Wed, Oct 28, 2009 at 11:29 AM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > The only change I made to the source code was the patch for
>> PayloadNearQuery
>> > (LUCENE-1986).
>>
>> That patch certainly shouldn't lead to this.
>>
>> > It's possible that our content contains U+FFFF. I will run in debugger
>> and
>> > see.
>>
>> OK may as well check just so we cover all possibilities.
>>
>> > The data is 'sensitive', so I may not be able to provide a bad segment,
>> > unfortunately.
>>
>> OK, maybe we can modify your CheckIndex instead.  Let's start with
>> this, which prints a warning whenever the docFreq differs but
>> otherwise continues (vs throwing RuntimeException).  I'm curious how
>> many terms show this, and whether the TermEnum keeps working after
>> this term that has different docFreq:
>>
>> Index: src/java/org/apache/lucene/index/CheckIndex.java
>> ===================================================================
>> --- src/java/org/apache/lucene/index/CheckIndex.java    (revision 829889)
>> +++ src/java/org/apache/lucene/index/CheckIndex.java    (working copy)
>> @@ -672,8 +672,8 @@
>>         }
>>
>>         if (freq0 + delCount != docFreq) {
>> -          throw new RuntimeException("term " + term + " docFreq=" +
>> -                                     docFreq + " != num docs seen " +
>> freq0 + " + num docs deleted " + delCount);
>> +          System.out.println("WARNING: term  " + term + " docFreq=" +
>> +                             docFreq + " != num docs seen " + freq0 +
>> " + num docs deleted " + delCount);
>>         }
>>       }
>>
>> Mike
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>
>>
>


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Just to be safe, I ran with the official jar file from one of the mirrors
and reproduced the problem.
The debug session is not showing any characters = '\uffff' (checking this in
Tokenizer).
The output from the modified CheckIndex follows. There are only a few terms
with the inconsistency. They are all legitimate terms from the app's
context. With this info, I might be able to isolate the source documents.
What should I be looking for when they are indexed?

CheckIndex output:

Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.4

Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
2.9]
  1 of 3: name=_0 docCount=413585
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=1,148.817
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=0
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [7704753 terms; 180326717 terms/docs pairs;
340244234 tokens]
    test: stored fields.......OK [1240755 total field count; avg 3 fields
per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  2 of 3: name=_1 docCount=359068
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=1,125.161
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=413585
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...WARNING: term  literals:cfid196$ docFreq=43 !=
num docs seen 4 + num docs deleted 0
WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
deleted 0
WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
deleted 0
WARNING: term  literals:cfid196$commandant docFreq=1 != num docs seen 9 +
num docs deleted 0
WARNING: term  literals:cfid196$on docFreq=3178 != num docs seen 1 + num
docs deleted 0
OK [7137621 terms; 179101847 terms/docs pairs; 346076058 tokens]
    test: stored fields.......OK [1077204 total field count; avg 3 fields
per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  3 of 3: name=_2 docCount=304849
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=962.004
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9.0
817268P - 2009-09-21 10:25:09, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=772653
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...WARNING: term  contents:? docFreq=1 != num
docs seen 246 + num docs deleted 0
WARNING: term  literals:cfid196$ docFreq=45 != num docs seen 4 + num docs
deleted 0
WARNING: term  literals:cfid196$ docFreq=1 != num docs seen 4 + num docs
deleted 0
WARNING: term  literals:cfid196$cashier docFreq=1 != num docs seen 37 + num
docs deleted 0
WARNING: term  literals:cfid196$interrogation docFreq=181 != num docs seen 1
+ num docs deleted 0
WARNING: term  literals:cfid196$leader docFreq=1 != num docs seen 353 + num
docs deleted 0
WARNING: term  literals:cfid196$microsoft docFreq=3114 != num docs seen 1 +
num docs deleted 0
WARNING: term  literals:cfid196$nt docFreq=200 != num docs seen 1 + num docs
deleted 0
OK [6497769 terms; 145296880 terms/docs pairs; 293458734 tokens]
    test: stored fields.......OK [914547 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

No problems were detected with this index.

Peter


On Wed, Oct 28, 2009 at 11:29 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com>
> wrote:
> > The only change I made to the source code was the patch for
> PayloadNearQuery
> > (LUCENE-1986).
>
> That patch certainly shouldn't lead to this.
>
> > It's possible that our content contains U+FFFF. I will run in debugger
> and
> > see.
>
> OK may as well check just so we cover all possibilities.
>
> > The data is 'sensitive', so I may not be able to provide a bad segment,
> > unfortunately.
>
> OK, maybe we can modify your CheckIndex instead.  Let's start with
> this, which prints a warning whenever the docFreq differs but
> otherwise continues (vs throwing RuntimeException).  I'm curious how
> many terms show this, and whether the TermEnum keeps working after
> this term that has different docFreq:
>
> Index: src/java/org/apache/lucene/index/CheckIndex.java
> ===================================================================
> --- src/java/org/apache/lucene/index/CheckIndex.java    (revision 829889)
> +++ src/java/org/apache/lucene/index/CheckIndex.java    (working copy)
> @@ -672,8 +672,8 @@
>         }
>
>         if (freq0 + delCount != docFreq) {
> -          throw new RuntimeException("term " + term + " docFreq=" +
> -                                     docFreq + " != num docs seen " +
> freq0 + " + num docs deleted " + delCount);
> +          System.out.println("WARNING: term  " + term + " docFreq=" +
> +                             docFreq + " != num docs seen " + freq0 +
> " + num docs deleted " + delCount);
>         }
>       }
>
> Mike
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com> wrote:
> The only change I made to the source code was the patch for PayloadNearQuery
> (LUCENE-1986).

That patch certainly shouldn't lead to this.

> It's possible that our content contains U+FFFF. I will run in debugger and
> see.

OK may as well check just so we cover all possibilities.

> The data is 'sensitive', so I may not be able to provide a bad segment,
> unfortunately.

OK, maybe we can modify your CheckIndex instead.  Let's start with
this, which prints a warning whenever the docFreq differs but
otherwise continues (vs throwing RuntimeException).  I'm curious how
many terms show this, and whether the TermEnum keeps working after
this term that has different docFreq:

Index: src/java/org/apache/lucene/index/CheckIndex.java
===================================================================
--- src/java/org/apache/lucene/index/CheckIndex.java	(revision 829889)
+++ src/java/org/apache/lucene/index/CheckIndex.java	(working copy)
@@ -672,8 +672,8 @@
         }

         if (freq0 + delCount != docFreq) {
-          throw new RuntimeException("term " + term + " docFreq=" +
-                                     docFreq + " != num docs seen " +
freq0 + " + num docs deleted " + delCount);
+          System.out.println("WARNING: term  " + term + " docFreq=" +
+                             docFreq + " != num docs seen " + freq0 +
" + num docs deleted " + delCount);
         }
       }
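
(For readers of the archive: CheckIndex is runnable as a plain main
class, so with the modified core jar on the classpath the check is,
e.g.:

  java -cp lucene-core-2.9.jar org.apache.lucene.index.CheckIndex <indexDir>

CheckIndex also accepts a -fix option, but that drops all documents in
broken segments, so it is best avoided while still diagnosing.)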

Mike

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


RE: IO exception during merge/optimize

Posted by Uwe Schindler <uw...@thetaphi.de>.
That's exactly what oal.util.UnicodeUtil does when converting UTF-8 to
UTF-16 (which is Java's internal encoding).
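
(A quick way to see what the stock JDK does at encode time, independent
of Lucene's conversion; an illustration only:

  import java.io.UnsupportedEncodingException;

  public class FFFFDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
      // U+FFFF is a Unicode noncharacter but still encodable: the JDK
      // UTF-8 encoder emits the bytes EF BF BF rather than dropping it,
      // so any discarding happens in Lucene's conversion, not the JDK's.
      for (byte b : "\uFFFF".getBytes("UTF-8")) {
        System.out.printf("%02X ", b); // prints: EF BF BF
      }
    }
  }
)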

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: uwe@thetaphi.de

> -----Original Message-----
> From: Michael McCandless [mailto:lucene@mikemccandless.com]
> Sent: Wednesday, October 28, 2009 4:25 PM
> To: java-user@lucene.apache.org
> Subject: Re: IO exception during merge/optimize
> 
> Right, I would expect Lucene would silently truncate the term at the
> U+FFFF, and not lead to this odd exception.
> 
> Mike
> 
> On Wed, Oct 28, 2009 at 11:23 AM, Robert Muir <rc...@gmail.com> wrote:
> > I might be wrong about this, but recently I intentionally tried to
> create
> > index with terms with U+FFFF to see if it would cause a problem :)
> >
> > the U+FFFF seemed to be discarded completely (maybe at UTF-8 encode
> time)...
> > then again I was using RAMDirectory.
> >
> > On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan
> <pe...@gmail.com> wrote:
> >
> >> The only change I made to the source code was the patch for
> >> PayloadNearQuery
> >> (LUCENE-1986).
> >> It's possible that our content contains U+FFFF. I will run in debugger
> and
> >> see.
> >> The data is 'sensitive', so I may not be able to provide a bad segment,
> >> unfortunately.
> >>
> >> Peter
> >>
> >> On Wed, Oct 28, 2009 at 10:43 AM, Michael McCandless <
> >> lucene@mikemccandless.com> wrote:
> >>
> >> > OK... when you exported the sources & built yourself, you didn't make
> >> > any changes, right?
> >> >
> >> > It's really odd how many of the errors are due to the term
> >> > "literals:cfid196$", or some variation (one time with "on" appended,
> >> > another time with "microsoft").  Do you know what documents typically
> >> > contain that term, and what the context is around it?  Maybe try to
> >> > index only those documents and see if this happens?  (It could
> >> > conceivably be caused by bad data, if this is some weird bug).  One
> >> > question: does your content ever use the [invalid] unicode character
> >> > U+FFFF?  (Lucene uses this internally to mark the end of the term).
> >> >
> >> > Would it be possible to zip up all files starting with _1c (should be
> >> > ~22 MB) and post somewhere that I could download?  That's the
> smallest
> >> > of the broken segments I think.
> >> >
> >> > I don't need the full IW output just yet, thanks.
> >> >
> >> > Mike
> >> >
> >> > On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan
> <pe...@gmail.com>
> >> > wrote:
> >> > > Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported
> the
> >> same
> >> > > problems when run multiple times.
> >> > >
> >> > >>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52"
> >> mean?
> >> > > This appears to be something added by the ant build, since I built
> >> Lucene
> >> > > from the source code.
> >> > >
> >> > > I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
> >> > > maxBufferedDocs=1000000
> >> > > This produced 49 segments, 9 of which are broken. The broken
> segments
> >> are
> >> > in
> >> > > the latter half, similar to my previous post with 3 segments. Do
> you
> >> > think
> >> > > this could be caused by 'bad' data, for example bad unicode
> characters?
> >> > >
> >> > > Here is the output from CheckIndex:
> >> >
> >> > ---------------------------------------------------------------------
> >> > To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> >> > For additional commands, e-mail: java-user-help@lucene.apache.org
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > Robert Muir
> > rcmuir@gmail.com
> >
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Robert Muir <rc...@gmail.com>.
That's exactly the result I saw, FWIW.

On Wed, Oct 28, 2009 at 11:25 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> Right, I would expect Lucene would silently truncate the term at the
> U+FFFF, and not lead to this odd exception.
>
> Mike
>
> On Wed, Oct 28, 2009 at 11:23 AM, Robert Muir <rc...@gmail.com> wrote:
> > I might be wrong about this, but recently I intentionally tried to create
> > index with terms with U+FFFF to see if it would cause a problem :)
> >
> > the U+FFFF seemed to be discarded completely (maybe at UTF-8 encode
> time)...
> > then again I was using RAMDirectory.
> >
> > On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <peterlkeegan@gmail.com
> > wrote:
> >
> >> The only change I made to the source code was the patch for
> >> PayloadNearQuery
> >> (LUCENE-1986).
> >> It's possible that our content contains U+FFFF. I will run in debugger
> and
> >> see.
> >> The data is 'sensitive', so I may not be able to provide a bad segment,
> >> unfortunately.
> >>
> >> Peter
> >>
> >> On Wed, Oct 28, 2009 at 10:43 AM, Michael McCandless <
> >> lucene@mikemccandless.com> wrote:
> >>
> >> > OK... when you exported the sources & built yourself, you didn't make
> >> > any changes, right?
> >> >
> >> > It's really odd how many of the errors are due to the term
> >> > "literals:cfid196$", or some variation (one time with "on" appended,
> >> > another time with "microsoft").  Do you know what documents typically
> >> > contain that term, and what the context is around it?  Maybe try to
> >> > index only those documents and see if this happens?  (It could
> >> > conceivably be caused by bad data, if this is some weird bug).  One
> >> > question: does your content ever use the [invalid] unicode character
> >> > U+FFFF?  (Lucene uses this internally to mark the end of the term).
> >> >
> >> > Would it be possible to zip up all files starting with _1c (should be
> >> > ~22 MB) and post somewhere that I could download?  That's the smallest
> >> > of the broken segments I think.
> >> >
> >> > I don't need the full IW output just yet, thanks.
> >> >
> >> > Mike
> >> >
> >> > On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan <
> peterlkeegan@gmail.com>
> >> > wrote:
> >> > > Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the
> >> same
> >> > > problems when run multiple times.
> >> > >
> >> > >>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52"
> >> mean?
> >> > > This appears to be something added by the ant build, since I built
> >> Lucene
> >> > > from the source code.
> >> > >
> >> > > I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
> >> > > maxBufferedDocs=1000000
> >> > > This produced 49 segments, 9 of which are broken. The broken
> segments
> >> are
> >> > in
> >> > > the latter half, similar to my previous post with 3 segments. Do you
> >> > think
> >> > > this could be caused by 'bad' data, for example bad unicode
> characters?
> >> > >
> >> > > Here is the output from CheckIndex:
> >> >
> >> > ---------------------------------------------------------------------
> >> > To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> >> > For additional commands, e-mail: java-user-help@lucene.apache.org
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > Robert Muir
> > rcmuir@gmail.com
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>


-- 
Robert Muir
rcmuir@gmail.com

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
Right, I would expect Lucene would silently truncate the term at the
U+FFFF, and not lead to this odd exception.

Mike

On Wed, Oct 28, 2009 at 11:23 AM, Robert Muir <rc...@gmail.com> wrote:
> I might be wrong about this, but recently I intentionally tried to create
> index with terms with U+FFFF to see if it would cause a problem :)
>
> the U+FFFF seemed to be discarded completely (maybe at UTF-8 encode time)...
> then again I was using RAMDirectory.
>
> On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com> wrote:
>
>> The only change I made to the source code was the patch for
>> PayloadNearQuery
>> (LUCENE-1986).
>> It's possible that our content contains U+FFFF. I will run in debugger and
>> see.
>> The data is 'sensitive', so I may not be able to provide a bad segment,
>> unfortunately.
>>
>> Peter
>>
>> On Wed, Oct 28, 2009 at 10:43 AM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>> > OK... when you exported the sources & built yourself, you didn't make
>> > any changes, right?
>> >
>> > It's really odd how many of the errors are due to the term
>> > "literals:cfid196$", or some variation (one time with "on" appended,
>> > another time with "microsoft").  Do you know what documents typically
>> > contain that term, and what the context is around it?  Maybe try to
>> > index only those documents and see if this happens?  (It could
>> > conceivably be caused by bad data, if this is some weird bug).  One
>> > question: does your content ever use the [invalid] unicode character
>> > U+FFFF?  (Lucene uses this internally to mark the end of the term).
>> >
>> > Would it be possible to zip up all files starting with _1c (should be
>> > ~22 MB) and post somewhere that I could download?  That's the smallest
>> > of the broken segments I think.
>> >
>> > I don't need the full IW output just yet, thanks.
>> >
>> > Mike
>> >
>> > On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan <pe...@gmail.com>
>> > wrote:
>> > > Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the
>> same
>> > > problems when run multiple times.
>> > >
>> > >>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52"
>> mean?
>> > > This appears to be something added by the ant build, since I built
>> Lucene
>> > > from the source code.
>> > >
>> > > I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
>> > > maxBufferedDocs=1000000
>> > > This produced 49 segments, 9 of which are broken. The broken segments
>> are
>> > in
>> > > the latter half, similar to my previous post with 3 segments. Do you
>> > think
>> > > this could be caused by 'bad' data, for example bad unicode characters?
>> > >
>> > > Here is the output from CheckIndex:
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> > For additional commands, e-mail: java-user-help@lucene.apache.org
>> >
>> >
>>
>
>
>
> --
> Robert Muir
> rcmuir@gmail.com
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Robert Muir <rc...@gmail.com>.
I might be wrong about this, but recently I intentionally tried to create an
index with terms with U+FFFF to see if it would cause a problem :)

the U+FFFF seemed to be discarded completely (maybe at UTF-8 encode time)...
then again I was using RAMDirectory.

On Wed, Oct 28, 2009 at 10:58 AM, Peter Keegan <pe...@gmail.com> wrote:

> The only change I made to the source code was the patch for
> PayloadNearQuery
> (LUCENE-1986).
> It's possible that our content contains U+FFFF. I will run in debugger and
> see.
> The data is 'sensitive', so I may not be able to provide a bad segment,
> unfortunately.
>
> Peter
>
> On Wed, Oct 28, 2009 at 10:43 AM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
> > OK... when you exported the sources & built yourself, you didn't make
> > any changes, right?
> >
> > It's really odd how many of the errors are due to the term
> > "literals:cfid196$", or some variation (one time with "on" appended,
> > another time with "microsoft").  Do you know what documents typically
> > contain that term, and what the context is around it?  Maybe try to
> > index only those documents and see if this happens?  (It could
> > conceivably be caused by bad data, if this is some weird bug).  One
> > question: does your content ever use the [invalid] unicode character
> > U+FFFF?  (Lucene uses this internally to mark the end of the term).
> >
> > Would it be possible to zip up all files starting with _1c (should be
> > ~22 MB) and post somewhere that I could download?  That's the smallest
> > of the broken segments I think.
> >
> > I don't need the full IW output just yet, thanks.
> >
> > Mike
> >
> > On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan <pe...@gmail.com>
> > wrote:
> > > Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the
> same
> > > problems when run multiple times.
> > >
> > >>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52"
> mean?
> > > This appears to be something added by the ant build, since I built
> Lucene
> > > from the source code.
> > >
> > > I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
> > > maxBufferedDocs=1000000
> > > This produced 49 segments, 9 of which are broken. The broken segments
> are
> > in
> > > the latter half, similar to my previous post with 3 segments. Do you
> > think
> > > this could be caused by 'bad' data, for example bad unicode characters?
> > >
> > > Here is the output from CheckIndex:
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> > For additional commands, e-mail: java-user-help@lucene.apache.org
> >
> >
>



-- 
Robert Muir
rcmuir@gmail.com

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
The only change I made to the source code was the patch for PayloadNearQuery
(LUCENE-1986).
It's possible that our content contains U+FFFF. I will run in debugger and
see.
The data is 'sensitive', so I may not be able to provide a bad segment,
unfortunately.

Peter

On Wed, Oct 28, 2009 at 10:43 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> OK... when you exported the sources & built yourself, you didn't make
> any changes, right?
>
> It's really odd how many of the errors are due to the term
> "literals:cfid196$", or some variation (one time with "on" appended,
> another time with "microsoft").  Do you know what documents typically
> contain that term, and what the context is around it?  Maybe try to
> index only those documents and see if this happens?  (It could
> conceivably be caused by bad data, if this is some weird bug).  One
> question: does your content ever use the [invalid] unicode character
> U+FFFF?  (Lucene uses this internally to mark the end of the term).
>
> Would it be possible to zip up all files starting with _1c (should be
> ~22 MB) and post somewhere that I could download?  That's the smallest
> of the broken segments I think.
>
> I don't need the full IW output just yet, thanks.
>
> Mike
>
> On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan <pe...@gmail.com>
> wrote:
> > Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the same
> > problems when run multiple times.
> >
> >>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?
> > This appears to be something added by the ant build, since I built Lucene
> > from the source code.
> >
> > I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
> > maxBufferedDocs=1000000
> > This produced 49 segments, 9 of which are broken. The broken segments are
> in
> > the latter half, similar to my previous post with 3 segments. Do you
> think
> > this could be caused by 'bad' data, for example bad unicode characters?
> >
> > Here is the output from CheckIndex:
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
OK... when you exported the sources & built yourself, you didn't make
any changes, right?

It's really odd how many of the errors are due to the term
"literals:cfid196$", or some variation (one time with "on" appended,
another time with "microsoft").  Do you know what documents typically
contain that term, and what the context is around it?  Maybe try to
index only those documents and see if this happens?  (It could
conceivably be caused by bad data, if this is some weird bug).  One
question: does your content ever use the [invalid] unicode character
U+FFFF?  (Lucene uses this internally to mark the end of the term).

Would it be possible to zip up all files starting with _1c (should be
~22 MB) and post somewhere that I could download?  That's the smallest
of the broken segments I think.

I don't need the full IW output just yet, thanks.

Mike

On Wed, Oct 28, 2009 at 10:21 AM, Peter Keegan <pe...@gmail.com> wrote:
> Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the same
> problems when run multiple times.
>
>>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?
> This appears to be something added by the ant build, since I built Lucene
> from the source code.
>
> I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
> maxBufferedDocs=1000000
> This produced 49 segments, 9 of which are broken. The broken segments are in
> the latter half, similar to my previous post with 3 segments. Do you think
> this could be caused by 'bad' data, for example bad unicode characters?
>
> Here is the output from CheckIndex:

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Yes, I used JDK 1.6.0_16 when running CheckIndex and it reported the same
problems when run multiple times.

>Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?
This appears to be something added by the ant build, since I built Lucene
from the source code.

I rebuilt the index using mergeFactor=50, ramBufferSize=200MB,
maxBufferedDocs=1000000
This produced 49 segments, 9 of which are broken. The broken segments are in
the latter half, similar to my previous post with 3 segments. Do you think
this could be caused by 'bad' data, for example bad unicode characters?
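
(In IndexWriter terms, the rebuild settings above are roughly the
following; a sketch assuming an already-constructed writer:

  writer.setMergeFactor(50);           // segments per level before a merge
  writer.setRAMBufferSizeMB(200.0);    // flush when the RAM buffer fills
  writer.setMaxBufferedDocs(1000000);  // doc-count flush trigger, in effect off
)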

Here is the output from CheckIndex:

Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.3

Segments file=segments_2 numSegments=49 version=FORMAT_DIAGNOSTICS [Lucene
2.9]
  1 of 49: name=_0 docCount=32607
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=104.853
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=0
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1044042 terms; 15675239 terms/docs pairs;
30451901 tokens]
    test: stored fields.......OK [97821 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  2 of 49: name=_1 docCount=29043
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=99.056
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=32607
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1039244 terms; 14310099 terms/docs pairs;
29363393 tokens]
    test: stored fields.......OK [87129 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  3 of 49: name=_2 docCount=28376
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=92.893
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=61650
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1070860 terms; 13575421 terms/docs pairs;
26651224 tokens]
    test: stored fields.......OK [85128 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  4 of 49: name=_3 docCount=26936
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=85.337
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=90026
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1100843 terms; 12224580 terms/docs pairs;
24327234 tokens]
    test: stored fields.......OK [80808 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  5 of 49: name=_4 docCount=24129
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=92.877
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=116962
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [953360 terms; 12679962 terms/docs pairs;
28713617 tokens]
    test: stored fields.......OK [72387 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  6 of 49: name=_5 docCount=28737
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=84.085
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=141091
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1046406 terms; 12476181 terms/docs pairs;
23798602 tokens]
    test: stored fields.......OK [86211 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  7 of 49: name=_6 docCount=31930
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=81.221
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=169828
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1040633 terms; 12527851 terms/docs pairs;
22544575 tokens]
    test: stored fields.......OK [95790 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  8 of 49: name=_7 docCount=30328
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=79.407
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=201758
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1011468 terms; 12179814 terms/docs pairs;
22052790 tokens]
    test: stored fields.......OK [90984 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  9 of 49: name=_8 docCount=29899
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=77.691
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=232086
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [984327 terms; 11954498 terms/docs pairs;
21591867 tokens]
    test: stored fields.......OK [89697 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  10 of 49: name=_9 docCount=29256
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=75.306
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=261985
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [964109 terms; 11588018 terms/docs pairs;
21067600 tokens]
    test: stored fields.......OK [87768 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  11 of 49: name=_a docCount=28152
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=73.1
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=291241
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [947892 terms; 11309631 terms/docs pairs;
20057187 tokens]
    test: stored fields.......OK [84456 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  12 of 49: name=_b docCount=28085
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=79.339
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=319393
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [874964 terms; 12908370 terms/docs pairs;
21303545 tokens]
    test: stored fields.......OK [84255 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  13 of 49: name=_c docCount=25826
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=68.772
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=347478
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [909574 terms; 10594343 terms/docs pairs;
18824830 tokens]
    test: stored fields.......OK [77478 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  14 of 49: name=_d docCount=24897
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=65.666
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=373304
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [900276 terms; 10033030 terms/docs pairs;
18078077 tokens]
    test: stored fields.......OK [74691 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  15 of 49: name=_e docCount=23703
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=64.102
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=398201
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [879633 terms; 9727229 terms/docs pairs;
17663716 tokens]
    test: stored fields.......OK [71109 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  16 of 49: name=_f docCount=22817
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=62.733
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=421904
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [858745 terms; 9502077 terms/docs pairs;
17280189 tokens]
    test: stored fields.......OK [68451 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  17 of 49: name=_g docCount=22048
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=61.599
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=444721
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [832264 terms; 9250137 terms/docs pairs;
17231073 tokens]
    test: stored fields.......OK [66144 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  18 of 49: name=_h docCount=20403
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=60.463
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=466769
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [806620 terms; 9140615 terms/docs pairs;
17229721 tokens]
    test: stored fields.......OK [61209 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  19 of 49: name=_i docCount=15832
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=61.809
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=487172
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [743992 terms; 9521640 terms/docs pairs;
18856338 tokens]
    test: stored fields.......OK [47496 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  20 of 49: name=_j docCount=15530
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=60.477
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=503004
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [736987 terms; 9326695 terms/docs pairs;
18446953 tokens]
    test: stored fields.......OK [46590 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  21 of 49: name=_k docCount=15154
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=59.197
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=518534
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [725501 terms; 9080575 terms/docs pairs;
18086724 tokens]
    test: stored fields.......OK [45462 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  22 of 49: name=_l docCount=15143
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=58.065
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=533688
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [718454 terms; 9001690 terms/docs pairs;
17486800 tokens]
    test: stored fields.......OK [45429 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  23 of 49: name=_m docCount=14772
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=57.173
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=548831
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [703510 terms; 8820732 terms/docs pairs;
17436315 tokens]
    test: stored fields.......OK [44316 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  24 of 49: name=_n docCount=14553
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=55.757
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=563603
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [699431 terms; 8655384 terms/docs pairs;
16754881 tokens]
    test: stored fields.......OK [43659 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  25 of 49: name=_o docCount=13915
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=54.793
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=578156
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [685499 terms; 8400803 terms/docs pairs;
16618782 tokens]
    test: stored fields.......OK [41745 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  26 of 49: name=_p docCount=13700
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=53.546
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=592071
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [678014 terms; 8217915 terms/docs pairs;
16282687 tokens]
    test: stored fields.......OK [41100 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  27 of 49: name=_q docCount=13629
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=52.761
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=605771
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [667953 terms; 8140291 terms/docs pairs;
15990296 tokens]
    test: stored fields.......OK [40887 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  28 of 49: name=_r docCount=15237
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=48.489
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=619400
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [705543 terms; 7039002 terms/docs pairs;
13716437 tokens]
    test: stored fields.......OK [45711 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  29 of 49: name=_s docCount=15104
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=47.815
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=634637
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [690968 terms; 6931518 terms/docs pairs;
13491286 tokens]
    test: stored fields.......OK [45312 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  30 of 49: name=_t docCount=14397
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=46.781
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=649741
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [676367 terms; 6715361 terms/docs pairs;
13217449 tokens]
    test: stored fields.......OK [43191 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  31 of 49: name=_u docCount=14221
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=45.819
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=664138
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=2 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=2 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [42663 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  32 of 49: name=_v docCount=13609
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=44.718
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=678359
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [656642 terms; 6391224 terms/docs pairs;
12712494 tokens]
    test: stored fields.......OK [40827 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  33 of 49: name=_w docCount=13667
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=43.979
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=691968
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [644235 terms; 6335940 terms/docs pairs;
12493135 tokens]
    test: stored fields.......OK [41001 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  34 of 49: name=_x docCount=13127
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=43.355
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=705635
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$on docFreq=143 !=
num docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$on docFreq=143 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [39381 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  35 of 49: name=_y docCount=18883
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=44.149
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=718762
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [623172 terms; 7078171 terms/docs pairs;
11703710 tokens]
    test: stored fields.......OK [56649 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  36 of 49: name=_z docCount=32665
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=96.094
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=737645
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=5 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=5 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [97995 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  37 of 49: name=_10 docCount=30048
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=93.591
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=770310
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1104452 terms; 13801203 terms/docs pairs;
26908610 tokens]
    test: stored fields.......OK [90144 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  38 of 49: name=_11 docCount=28918
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=92.029
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=800358
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1071103 terms; 13482603 terms/docs pairs;
26689595 tokens]
    test: stored fields.......OK [86754 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  39 of 49: name=_12 docCount=28061
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=90.18
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=829276
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [1046474 terms; 13165220 terms/docs pairs;
26172166 tokens]
    test: stored fields.......OK [84183 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  40 of 49: name=_13 docCount=27951
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=88.815
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=857337
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
seen 23 + num docs deleted 0]
java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 23 +
num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [83853 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  41 of 49: name=_14 docCount=25719
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=84.413
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=885288
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$microsoft
docFreq=261 != num docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$microsoft docFreq=261 !=
num docs seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [77157 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  42 of 49: name=_15 docCount=24619
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=82.36
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=911007
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=4 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=4 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [73857 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  43 of 49: name=_16 docCount=24737
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=81.418
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=935626
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [980037 terms; 11775562 terms/docs pairs;
23725750 tokens]
    test: stored fields.......OK [74211 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  44 of 49: name=_17 docCount=24024
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=78.846
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=960363
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [972926 terms; 11434745 terms/docs pairs;
22790354 tokens]
    test: stored fields.......OK [72072 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  45 of 49: name=_18 docCount=23092
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=77.403
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=984387
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [949210 terms; 11127490 terms/docs pairs;
22554444 tokens]
    test: stored fields.......OK [69276 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  46 of 49: name=_19 docCount=21704
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=75.87
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=1007479
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=4 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=4 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [65112 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  47 of 49: name=_1a docCount=21226
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=73.991
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=1029183
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [917383 terms; 10553486 terms/docs pairs;
21664562 tokens]
    test: stored fields.......OK [63678 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  48 of 49: name=_1b docCount=20658
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=72.712
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=1050409
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=4 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=4 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [61974 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  49 of 49: name=_1c docCount=6343
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=21.919
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-28 09:13:38, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=1071067
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=3 != num
docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=3 != num docs
seen 1 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [19029 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

WARNING: 9 broken segments (containing 187007 documents) detected
WARNING: would write new segments file, and 187007 documents would be lost,
if -fix were specified

Here is the infoStream output from the IndexWriter:

IFD [Indexer]: setInfoStream
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2f74219d
IW 0 [Indexer]: setInfoStream:
dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.3
autoCommit=false
mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@714ae2c1
mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@6b6d2702
ramBufferSizeMB=16.0
maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
maxFieldLength=2147483647 index=
IW 0 [Indexer]: setRAMBufferSizeMB 200.0
IW 0 [Indexer]: setMaxBufferedDocs 1000000
IW 0 [Indexer]: flush at getReader
IW 0 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
numBufDelTerms=0
IW 0 [Indexer]:   index before flush
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=195.53 allocMB=195.53
deletesMB=4.492 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=32607 numBufDelTerms=32607
IW 0 [UpdWriterBuild]:   index before flush
IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=32607
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=205028352 newFlushedSize=109946095
docs/MB=310.979 new/old=53.625%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 1 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=191.496 allocMB=195.53
deletesMB=8.523 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
docStoreOffset=32607 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=29043 numBufDelTerms=29043
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=29043
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=200798208 newFlushedSize=103867694
docs/MB=293.198 new/old=51.727%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 2 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=187.555 vs
trigger=200 allocMB=197.555 deletesMB=12.461 vs trigger=210
byteBlockFree=9.938 charBlockFree=0
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=9.938 usedMB=200.015
allocMB=187.617
IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
docStoreOffset=61650 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28376 numBufDelTerms=28376
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=28376
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=196665344 newFlushedSize=97404685
docs/MB=305.472 new/old=49.528%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 3 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=183.81 allocMB=190.247
deletesMB=16.203 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
docStoreOffset=90026 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=26936 numBufDelTerms=26936
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=26936
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=192738304 newFlushedSize=89482320
docs/MB=315.643 new/old=46.427%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 4 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=180.466 vs
trigger=200 allocMB=194.278 deletesMB=19.556 vs trigger=210 byteBlockFree=0
charBlockFree=3.062
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=13.75 usedMB=200.022
allocMB=180.528
IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
docStoreOffset=116962 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=24129 numBufDelTerms=24129
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=24129
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=189232128 newFlushedSize=97388095
docs/MB=259.797 new/old=51.465%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 5 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=176.461 vs
trigger=200 allocMB=189.398 deletesMB=23.551 vs trigger=210
byteBlockFree=12.875 charBlockFree=0
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=12.875 usedMB=200.012
allocMB=176.523
IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
docStoreOffset=141091 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28737 numBufDelTerms=28737
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=28737
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=185032704 newFlushedSize=88168939
docs/MB=341.764 new/old=47.65%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 6 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=172.017 allocMB=176.523
deletesMB=27.994 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_6 docStoreSegment=_0
docStoreOffset=169828 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=31930 numBufDelTerms=31930
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _6 numDocs=31930
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=180372480 newFlushedSize=85166081
docs/MB=393.126 new/old=47.217%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [7 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [7 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 7 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 7 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=167.828 allocMB=176.523
deletesMB=32.214 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_7 docStoreSegment=_0
docStoreOffset=201758 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=30328 numBufDelTerms=30328
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _7 numDocs=30328
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=175980544 newFlushedSize=83263539
docs/MB=381.934 new/old=47.314%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [8 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [8 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 8 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 8 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=163.662 vs
trigger=200 allocMB=176.523 deletesMB=36.375 vs trigger=210 byteBlockFree=7
charBlockFree=1.312
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=12.799 usedMB=200.037
allocMB=163.725
IW 0 [UpdWriterBuild]:   flush: segment=_8 docStoreSegment=_0
docStoreOffset=232086 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=29899 numBufDelTerms=29899
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _8 numDocs=29899
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=171612160 newFlushedSize=81464942
docs/MB=384.845 new/old=47.47%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [9 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [9 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 9 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 9 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=159.569 allocMB=163.725
deletesMB=40.446 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_9 docStoreSegment=_0
docStoreOffset=261985 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=29256 numBufDelTerms=29256
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _9 numDocs=29256
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=167320576 newFlushedSize=78964162
docs/MB=388.494 new/old=47.193%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [10 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [10 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 10 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 10 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=155.663 allocMB=163.725
deletesMB=44.359 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_a docStoreSegment=_0
docStoreOffset=291241 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28152 numBufDelTerms=28152
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _a numDocs=28152
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=163224576 newFlushedSize=76650343
docs/MB=385.119 new/old=46.96%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [11 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [11 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 11 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 11 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=151.807 vs
trigger=200 allocMB=163.725 deletesMB=48.221 vs trigger=210
byteBlockFree=1.094 charBlockFree=2.812
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=11.855 usedMB=200.028
allocMB=151.869
IW 0 [UpdWriterBuild]:   flush: segment=_b docStoreSegment=_0
docStoreOffset=319393 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28085 numBufDelTerms=28085
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _b numDocs=28085
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=159180800 newFlushedSize=83192732
docs/MB=353.988 new/old=52.263%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [12 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [12 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 12 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 12 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=148.189 allocMB=155.283
deletesMB=51.811 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_c docStoreSegment=_0
docStoreOffset=347478 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=25826 numBufDelTerms=25826
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _c numDocs=25826
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=155387904 newFlushedSize=72112070
docs/MB=375.534 new/old=46.408%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [13 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [13 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 13 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 13 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=144.731 vs
trigger=200 allocMB=155.283 deletesMB=55.274 vs trigger=210
byteBlockFree=9.594 charBlockFree=0.188
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=10.489 usedMB=200.006
allocMB=144.794
IW 0 [UpdWriterBuild]:   flush: segment=_d docStoreSegment=_0
docStoreOffset=373304 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=24897 numBufDelTerms=24897
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _d numDocs=24897
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=151761920 newFlushedSize=68855841
docs/MB=379.146 new/old=45.371%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [14 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [14 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 14 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 14 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=141.435 allocMB=144.794
deletesMB=58.572 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_e docStoreSegment=_0
docStoreOffset=398201 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=23703 numBufDelTerms=23703
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _e numDocs=23703
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=148304896 newFlushedSize=67215266
docs/MB=369.773 new/old=45.322%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [15 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [15 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 15 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 15 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=138.261 allocMB=144.794
deletesMB=61.746 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_f docStoreSegment=_0
docStoreOffset=421904 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=22817 numBufDelTerms=22817
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _f numDocs=22817
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=144976896 newFlushedSize=65780307
docs/MB=363.716 new/old=45.373%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [16 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [16 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 16 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 16 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=135.207 allocMB=144.794
deletesMB=64.812 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_g docStoreSegment=_0
docStoreOffset=444721 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=22048 numBufDelTerms=22048
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _g numDocs=22048
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=141774848 newFlushedSize=64591081
docs/MB=357.929 new/old=45.559%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [17 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [17 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 17 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 17 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=132.374 vs
trigger=200 allocMB=144.794 deletesMB=67.648 vs trigger=210
byteBlockFree=3.312 charBlockFree=2.25
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=12.357 usedMB=200.022
allocMB=132.437
IW 0 [UpdWriterBuild]:   flush: segment=_h docStoreSegment=_0
docStoreOffset=466769 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=20403 numBufDelTerms=20403
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _h numDocs=20403
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=138804224 newFlushedSize=63399427
docs/MB=337.449 new/old=45.675%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [18 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [18 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 18 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 18 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=130.167 allocMB=136.312
deletesMB=69.837 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_i docStoreSegment=_0
docStoreOffset=487172 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15832 numBufDelTerms=15832
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _i numDocs=15832
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=136489984 newFlushedSize=64811536
docs/MB=256.144 new/old=47.484%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [19 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [19 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 19 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 19 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=128.046 allocMB=136.312
deletesMB=71.985 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_j docStoreSegment=_0
docStoreOffset=503004 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15530 numBufDelTerms=15530
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _j numDocs=15530
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=134265856 newFlushedSize=63414576
docs/MB=256.792 new/old=47.231%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [20 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [20 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 20 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 20 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=125.924 vs
trigger=200 allocMB=136.312 deletesMB=74.08 vs trigger=210 byteBlockFree=2.5
charBlockFree=1.969
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=10.325 usedMB=200.004
allocMB=125.986
IW 0 [UpdWriterBuild]:   flush: segment=_k docStoreSegment=_0
docStoreOffset=518534 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15154 numBufDelTerms=15154
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _k numDocs=15154
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=132040704 newFlushedSize=62072781
docs/MB=255.992 new/old=47.01%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [21 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [21 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 21 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 21 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=123.851 allocMB=125.986
deletesMB=76.174 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_l docStoreSegment=_0
docStoreOffset=533688 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15143 numBufDelTerms=15143
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _l numDocs=15143
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=129866752 newFlushedSize=60885538
docs/MB=260.794 new/old=46.883%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [22 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [22 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 22 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 22 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=121.794 allocMB=125.986
deletesMB=78.217 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_m docStoreSegment=_0
docStoreOffset=548831 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=14772 numBufDelTerms=14772
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _m numDocs=14772
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=127710208 newFlushedSize=59950371
docs/MB=258.373 new/old=46.943%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [23 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [23 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 23 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 23 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=119.778 allocMB=125.986
deletesMB=80.229 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_n docStoreSegment=_0
docStoreOffset=563603 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=14553 numBufDelTerms=14553
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _n numDocs=14553
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=125596672 newFlushedSize=58464817
docs/MB=261.01 new/old=46.55%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [24 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [24 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 24 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 24 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=117.865 allocMB=125.986
deletesMB=82.154 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_o docStoreSegment=_0
docStoreOffset=578156 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13915 numBufDelTerms=13915
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _o numDocs=13915
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=123590656 newFlushedSize=57453924
docs/MB=253.959 new/old=46.487%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [25 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [25 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 25 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 25 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
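
On the "LMP" lines: the printed range is always 0.75 wide (e.g. 3.982974 to
4.732974), which matches LogMergePolicy's LEVEL_LOG_SPAN, so all of these
segments sit in a single size level. That no merge is ever selected, even with
dozens of same-level segments, suggests this run uses a mergeFactor larger than
the segment counts seen so far. Purely as an illustration (not confirmed from
the actual indexer):

    writer.setMergeFactor(100);   // hypothetical value; the 2.9 default is 10

which would defer all merging to the explicit optimize at the end.
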
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=115.978 vs
trigger=200 allocMB=125.986 deletesMB=84.048 vs trigger=210
byteBlockFree=5.438 charBlockFree=1.062
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=9.946 usedMB=200.026
allocMB=116.04
IW 0 [UpdWriterBuild]:   flush: segment=_p docStoreSegment=_0
docStoreOffset=592071 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13700 numBufDelTerms=13700
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _p numDocs=13700
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=121611264 newFlushedSize=56147055
docs/MB=255.855 new/old=46.169%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [26 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [26 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 26 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 26 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=114.08 allocMB=116.04
deletesMB=85.933 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_q docStoreSegment=_0
docStoreOffset=605771 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13629 numBufDelTerms=13629
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _q numDocs=13629
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=119621632 newFlushedSize=55323613
docs/MB=258.317 new/old=46.249%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [27 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [27 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 27 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 27 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=111.958 allocMB=118.677
deletesMB=88.051 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_r docStoreSegment=_0
docStoreOffset=619400 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15237 numBufDelTerms=15237
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _r numDocs=15237
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=117396480 newFlushedSize=50844181
docs/MB=314.238 new/old=43.31%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [28 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [28 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 28 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 28 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=109.855 allocMB=118.677
deletesMB=90.152 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_s docStoreSegment=_0
docStoreOffset=634637 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=15104 numBufDelTerms=15104
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _s numDocs=15104
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=115191808 newFlushedSize=50137876
docs/MB=315.883 new/old=43.526%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [29 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [29 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 29 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 29 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=107.847 vs
trigger=200 allocMB=118.677 deletesMB=92.154 vs trigger=210 byteBlockFree=8
charBlockFree=0.656
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=10.768 usedMB=200.001
allocMB=107.909
IW 0 [UpdWriterBuild]:   flush: segment=_t docStoreSegment=_0
docStoreOffset=649741 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=14397 numBufDelTerms=14397
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _t numDocs=14397
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=113085440 newFlushedSize=49053320
docs/MB=307.754 new/old=43.377%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [30 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [30 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 30 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 30 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=105.897 allocMB=107.909
deletesMB=94.132 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_u docStoreSegment=_0
docStoreOffset=664138 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=14221 numBufDelTerms=14221
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _u numDocs=14221
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=111041536 newFlushedSize=48043942
docs/MB=310.378 new/old=43.267%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [31 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [31 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 31 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 31 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=104.006 allocMB=107.909
deletesMB=96.025 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_v docStoreSegment=_0
docStoreOffset=678359 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13609 numBufDelTerms=13609
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _v numDocs=13609
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=109058048 newFlushedSize=46890005
docs/MB=304.331 new/old=42.995%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [32 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [32 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 32 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 32 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=102.099 allocMB=107.909
deletesMB=97.926 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_w docStoreSegment=_0
docStoreOffset=691968 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13667 numBufDelTerms=13667
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _w numDocs=13667
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=107058176 newFlushedSize=46115445
docs/MB=310.761 new/old=43.075%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [33 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [33 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 33 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 33 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=100.259 allocMB=107.909
deletesMB=99.752 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_x docStoreSegment=_0
docStoreOffset=705635 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=13127 numBufDelTerms=13127
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _x numDocs=13127
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=105128960 newFlushedSize=45460919
docs/MB=302.78 new/old=43.243%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [34 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [34 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 34 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 34 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=97.644 vs
trigger=200 allocMB=107.909 deletesMB=102.378 vs trigger=210
byteBlockFree=5.375 charBlockFree=0.969
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=10.203 usedMB=200.022
allocMB=97.706
IW 0 [UpdWriterBuild]:   flush: segment=_y docStoreSegment=_0
docStoreOffset=718762 flushDocs=true flushDeletes=true flushDocStores=false
numDocs=18883 numBufDelTerms=18883
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _y numDocs=18883
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=102386688 newFlushedSize=46293479
docs/MB=427.712 new/old=45.214%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [35 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: DW: apply 737645 buffered deleted terms and 0 deleted
docIDs and 0 deleted queries on 35 segments.
IFD [UpdWriterBuild]: now checkpoint "segments_1" [35 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 35 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 35 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
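
This is the interesting cycle: the _y flush above ran with flushDeletes=true,
and the "apply 737645 buffered deleted terms" line accounts for every document
added so far (docStoreOffset 718762 + numDocs 18883 = 737645). One buffered
delete term per added document, matching numBufDelTerms == numDocs in every
flush, is what you would see if each document goes in via updateDocument
(illustrative field name only):

    // one delete Term is buffered per call, even if the id is new:
    writer.updateDocument(new Term("id", docId), doc);

Once the deletes are applied, deletesMB drops from ~102MB back to 4.545 below,
and with nearly the whole 200MB available again the flushed segments jump back
up in size (_z:C32665).
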
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=195.476 allocMB=195.538
deletesMB=4.545 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_z docStoreSegment=_0
docStoreOffset=737645 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=32665 numBufDelTerms=32665
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _z numDocs=32665
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=204971008 newFlushedSize=100761432
docs/MB=339.929 new/old=49.159%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [36 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [36 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 36 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 36 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=191.292 allocMB=195.538
deletesMB=8.724 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_10 docStoreSegment=_0
docStoreOffset=770310 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=30048 numBufDelTerms=30048
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _10 numDocs=30048
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=200584192 newFlushedSize=98136465
docs/MB=321.059 new/old=48.925%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [37 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [37 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 37 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 37 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
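
Segment names, by the way, are just a base-36 counter, which is why _z rolls
over to _10 above; "CNNN" is the segment's doc count, and "->_0" means the
segment shares doc store _0 (flushDocStores=false on every flush). The rollover
is plain Integer.toString:

    System.out.println(Integer.toString(35, 36)); // "z"
    System.out.println(Integer.toString(36, 36)); // "10"
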
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=187.273 allocMB=195.538
deletesMB=12.746 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_11 docStoreSegment=_0
docStoreOffset=800358 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28918 numBufDelTerms=28918
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _11 numDocs=28918
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=196370432 newFlushedSize=96499609
docs/MB=314.226 new/old=49.142%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [38 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [38 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 38 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 38 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=183.367 vs
trigger=200 allocMB=195.538 deletesMB=16.649 vs trigger=210
byteBlockFree=4.906 charBlockFree=1.875
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=12.108 usedMB=200.016
allocMB=183.43
IW 0 [UpdWriterBuild]:   flush: segment=_12 docStoreSegment=_0
docStoreOffset=829276 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=28061 numBufDelTerms=28061
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _12 numDocs=28061
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=192274432 newFlushedSize=94559844
docs/MB=311.169 new/old=49.18%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [39 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [39 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 39 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 39 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=179.492 allocMB=183.43
deletesMB=20.537 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_13 docStoreSegment=_0
docStoreOffset=857337 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=27951 numBufDelTerms=27951
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _13 numDocs=27951
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=188211200 newFlushedSize=93129403
docs/MB=314.71 new/old=49.481%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [40 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [40 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 40 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 40 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=175.89 allocMB=183.43
deletesMB=24.113 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_14 docStoreSegment=_0
docStoreOffset=885288 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=25719 numBufDelTerms=25719
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _14 numDocs=25719
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=184433664 newFlushedSize=88513536
docs/MB=304.68 new/old=47.992%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [41 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [41 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 41 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 41 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=172.465 vs
trigger=200 allocMB=183.43 deletesMB=27.537 vs trigger=210
byteBlockFree=6.844 charBlockFree=1.156
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=10.902 usedMB=200.002
allocMB=172.527
IW 0 [UpdWriterBuild]:   flush: segment=_15 docStoreSegment=_0
docStoreOffset=911007 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=24619 numBufDelTerms=24619
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _15 numDocs=24619
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=180842496 newFlushedSize=86360225
docs/MB=298.921 new/old=47.754%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [42 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [42 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 42 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 42 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=169.036 allocMB=172.527
deletesMB=30.977 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_16 docStoreSegment=_0
docStoreOffset=935626 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=24737 numBufDelTerms=24737
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _16 numDocs=24737
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=177247232 newFlushedSize=85372240
docs/MB=303.83 new/old=48.166%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [43 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [43 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 43 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 43 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=165.696 allocMB=172.527
deletesMB=34.319 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_17 docStoreSegment=_0
docStoreOffset=960363 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=24024 numBufDelTerms=24024
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _17 numDocs=24024
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=173745152 newFlushedSize=82676131
docs/MB=304.695 new/old=47.585%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [44 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [44 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 44 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 44 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=162.496 vs
trigger=200 allocMB=172.527 deletesMB=37.53 vs trigger=210
byteBlockFree=4.562 charBlockFree=1.25
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=9.969 usedMB=200.026
allocMB=162.559
IW 0 [UpdWriterBuild]:   flush: segment=_18 docStoreSegment=_0
docStoreOffset=984387 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=23092 numBufDelTerms=23092
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _18 numDocs=23092
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=170389504 newFlushedSize=81162275
docs/MB=298.337 new/old=47.633%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [45 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [45 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 45 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 45 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=159.453 allocMB=162.559
deletesMB=40.549 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_19 docStoreSegment=_0
docStoreOffset=1007479 flushDocs=true flushDeletes=false
flushDocStores=false numDocs=21704 numBufDelTerms=21704
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _19 numDocs=21704
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=167198720 newFlushedSize=79555567
docs/MB=286.068 new/old=47.581%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [46 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [46 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 46 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 46 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=156.562 allocMB=162.559
deletesMB=43.501 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_1a docStoreSegment=_0
docStoreOffset=1029183 flushDocs=true flushDeletes=false
flushDocStores=false numDocs=21226 numBufDelTerms=21226
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _1a numDocs=21226
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=164167680 newFlushedSize=77585028
docs/MB=286.873 new/old=47.26%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [47 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [47 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 47 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 47 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=153.64 allocMB=162.559
deletesMB=46.374 triggerMB=200
IW 0 [UpdWriterBuild]:   flush: segment=_1b docStoreSegment=_0
docStoreOffset=1050409 flushDocs=true flushDeletes=false
flushDocStores=false numDocs=20658 numBufDelTerms=20658
IW 0 [UpdWriterBuild]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _1b numDocs=20658
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=161102848 newFlushedSize=76244155
docs/MB=284.107 new/old=47.326%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [48 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [48 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 48 segments
IW 0 [UpdWriterBuild]: LMP:   level 3.982974 to 4.732974: 48 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0
_1b:C20658->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [Indexer]: now flush at close
IW 0 [Indexer]:   flush: segment=_1c docStoreSegment=_0
docStoreOffset=1071067 flushDocs=true flushDeletes=true flushDocStores=true
numDocs=6343 numBufDelTerms=6343
IW 0 [Indexer]:   index before flush _0:C32607->_0 _1:C29043->_0
_2:C28376->_0 _3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0
_7:C30328->_0 _8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0
_c:C25826->_0 _d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0
_h:C20403->_0 _i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0
_m:C14772->_0 _n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0
_r:C15237->_0 _s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0
_w:C13667->_0 _x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0
_11:C28918->_0 _12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0
_16:C24737->_0 _17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0
_1b:C20658->_0
IW 0 [Indexer]:   flush shared docStore segment _0
IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
numDocs=1077410
IW 0 [Indexer]: DW: flush postings as segment _1c numDocs=6343
IW 0 [Indexer]: DW:   oldRAMSize=62278656 newFlushedSize=22983684
docs/MB=289.384 new/old=36.905%
IFD [Indexer]: now checkpoint "segments_1" [49 segments ; isCommit = false]
IW 0 [Indexer]: DW: apply 339765 buffered deleted terms and 0 deleted docIDs
and 0 deleted queries on 49 segments.
IFD [Indexer]: now checkpoint "segments_1" [49 segments ; isCommit = false]
IW 0 [Indexer]: LMP: findMerges: 49 segments
IW 0 [Indexer]: LMP:   level 3.982974 to 4.732974: 49 segments
IW 0 [Indexer]: CMS: now merge
IW 0 [Indexer]: CMS:   index: _0:C32607->_0 _1:C29043->_0 _2:C28376->_0
_3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0 _7:C30328->_0
_8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0 _c:C25826->_0
_d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0 _h:C20403->_0
_i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0 _m:C14772->_0
_n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0 _r:C15237->_0
_s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0 _w:C13667->_0
_x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0 _11:C28918->_0
_12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0 _16:C24737->_0
_17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0 _1b:C20658->_0
_1c:C6343->_0
IW 0 [Indexer]: CMS:   no more merges pending; now return
IW 0 [Indexer]: CMS: now merge
IW 0 [Indexer]: CMS:   index: _0:C32607->_0 _1:C29043->_0 _2:C28376->_0
_3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0 _7:C30328->_0
_8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0 _c:C25826->_0
_d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0 _h:C20403->_0
_i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0 _m:C14772->_0
_n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0 _r:C15237->_0
_s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0 _w:C13667->_0
_x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0 _11:C28918->_0
_12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0 _16:C24737->_0
_17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0 _1b:C20658->_0
_1c:C6343->_0
IW 0 [Indexer]: CMS:   no more merges pending; now return
IW 0 [Indexer]: now call final commit()
IW 0 [Indexer]: startCommit(): start sizeInBytes=0
IW 0 [Indexer]: startCommit index=_0:C32607->_0 _1:C29043->_0 _2:C28376->_0
_3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0 _7:C30328->_0
_8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0 _c:C25826->_0
_d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0 _h:C20403->_0
_i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0 _m:C14772->_0
_n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0 _r:C15237->_0
_s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0 _w:C13667->_0
_x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0 _11:C28918->_0
_12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0 _16:C24737->_0
_17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0 _1b:C20658->_0
_1c:C6343->_0 changeCount=147
IW 0 [Indexer]: now sync _9.tis
IW 0 [Indexer]: now sync _8.tis
IW 0 [Indexer]: now sync _d.tii
IW 0 [Indexer]: now sync _q.prx
IW 0 [Indexer]: now sync _15.frq
IW 0 [Indexer]: now sync _9.tii
IW 0 [Indexer]: now sync _c.tii
IW 0 [Indexer]: now sync _k.fnm
IW 0 [Indexer]: now sync _7.fnm
IW 0 [Indexer]: now sync _17.frq
IW 0 [Indexer]: now sync _4.frq
IW 0 [Indexer]: now sync _7.tis
IW 0 [Indexer]: now sync _c.tis
IW 0 [Indexer]: now sync _10.tii
IW 0 [Indexer]: now sync _y.nrm
IW 0 [Indexer]: now sync _13.fnm
IW 0 [Indexer]: now sync _d.tis
IW 0 [Indexer]: now sync _w.fnm
IW 0 [Indexer]: now sync _u.tii
IW 0 [Indexer]: now sync _10.tis
IW 0 [Indexer]: now sync _a.prx
IW 0 [Indexer]: now sync _2.prx
IW 0 [Indexer]: now sync _u.tis
IW 0 [Indexer]: now sync _7.tii
IW 0 [Indexer]: now sync _8.frq
IW 0 [Indexer]: now sync _2.frq
IW 0 [Indexer]: now sync _h.frq
IW 0 [Indexer]: now sync _18.tis
IW 0 [Indexer]: now sync _15.tis
IW 0 [Indexer]: now sync _10.prx
IW 0 [Indexer]: now sync _14.frq
IW 0 [Indexer]: now sync _8.tii
IW 0 [Indexer]: now sync _g.nrm
IW 0 [Indexer]: now sync _15.tii
IW 0 [Indexer]: now sync _18.tii
IW 0 [Indexer]: now sync _0.frq
IW 0 [Indexer]: now sync _a.fnm
IW 0 [Indexer]: now sync _2.nrm
IW 0 [Indexer]: now sync _r.nrm
IW 0 [Indexer]: now sync _y.tis
IW 0 [Indexer]: now sync _1b.prx
IW 0 [Indexer]: now sync _1a.fnm
IW 0 [Indexer]: now sync _t.frq
IW 0 [Indexer]: now sync _y.tii
IW 0 [Indexer]: now sync _6.tis
IW 0 [Indexer]: now sync _m.tis
IW 0 [Indexer]: now sync _p.tii
IW 0 [Indexer]: now sync _v.prx
IW 0 [Indexer]: now sync _m.tii
IW 0 [Indexer]: now sync _y.fnm
IW 0 [Indexer]: now sync _g.prx
IW 0 [Indexer]: now sync _x.fnm
IW 0 [Indexer]: now sync _p.tis
IW 0 [Indexer]: now sync _z.prx
IW 0 [Indexer]: now sync _15.nrm
IW 0 [Indexer]: now sync _i.frq
IW 0 [Indexer]: now sync _10.frq
IW 0 [Indexer]: now sync _4.nrm
IW 0 [Indexer]: now sync _6.tii
IW 0 [Indexer]: now sync _5.frq
IW 0 [Indexer]: now sync _g.frq
IW 0 [Indexer]: now sync _2.fnm
IW 0 [Indexer]: now sync _b.frq
IW 0 [Indexer]: now sync _1a.prx
IW 0 [Indexer]: now sync _x.prx
IW 0 [Indexer]: now sync _18.fnm
IW 0 [Indexer]: now sync _a.nrm
IW 0 [Indexer]: now sync _1b.frq
IW 0 [Indexer]: now sync _f.tis
IW 0 [Indexer]: now sync _0.tis
IW 0 [Indexer]: now sync _g.tis
IW 0 [Indexer]: now sync _1a.tis
IW 0 [Indexer]: now sync _5.prx
IW 0 [Indexer]: now sync _1c.nrm
IW 0 [Indexer]: now sync _15.prx
IW 0 [Indexer]: now sync _m.prx
IW 0 [Indexer]: now sync _1a.tii
IW 0 [Indexer]: now sync _4.prx
IW 0 [Indexer]: now sync _12.tii
IW 0 [Indexer]: now sync _i.tis
IW 0 [Indexer]: now sync _o.prx
IW 0 [Indexer]: now sync _5.nrm
IW 0 [Indexer]: now sync _p.nrm
IW 0 [Indexer]: now sync _g.tii
IW 0 [Indexer]: now sync _7.frq
IW 0 [Indexer]: now sync _i.tii
IW 0 [Indexer]: now sync _12.tis
IW 0 [Indexer]: now sync _f.tii
IW 0 [Indexer]: now sync _s.nrm
IW 0 [Indexer]: now sync _e.fnm
IW 0 [Indexer]: now sync _m.fnm
IW 0 [Indexer]: now sync _o.fnm
IW 0 [Indexer]: now sync _h.fnm
IW 0 [Indexer]: now sync _19.fnm
IW 0 [Indexer]: now sync _h.tis
IW 0 [Indexer]: now sync _j.tis
IW 0 [Indexer]: now sync _u.nrm
IW 0 [Indexer]: now sync _0.tii
IW 0 [Indexer]: now sync _h.tii
IW 0 [Indexer]: now sync _j.tii
IW 0 [Indexer]: now sync _q.fnm
IW 0 [Indexer]: now sync _14.fnm
IW 0 [Indexer]: now sync _6.nrm
IW 0 [Indexer]: now sync _16.tii
IW 0 [Indexer]: now sync _x.nrm
IW 0 [Indexer]: now sync _3.tii
IW 0 [Indexer]: now sync _1c.tii
IW 0 [Indexer]: now sync _t.fnm
IW 0 [Indexer]: now sync _1a.frq
IW 0 [Indexer]: now sync _6.frq
IW 0 [Indexer]: now sync _u.frq
IW 0 [Indexer]: now sync _e.tis
IW 0 [Indexer]: now sync _b.nrm
IW 0 [Indexer]: now sync _s.fnm
IW 0 [Indexer]: now sync _w.prx
IW 0 [Indexer]: now sync _t.prx
IW 0 [Indexer]: now sync _s.tii
IW 0 [Indexer]: now sync _e.tii
IW 0 [Indexer]: now sync _14.nrm
IW 0 [Indexer]: now sync _l.tis
IW 0 [Indexer]: now sync _l.tii
IW 0 [Indexer]: now sync _16.tis
IW 0 [Indexer]: now sync _n.nrm
IW 0 [Indexer]: now sync _10.fnm
IW 0 [Indexer]: now sync _5.fnm
IW 0 [Indexer]: now sync _b.prx
IW 0 [Indexer]: now sync _n.prx
IW 0 [Indexer]: now sync _x.frq
IW 0 [Indexer]: now sync _14.tis
IW 0 [Indexer]: now sync _b.tis
IW 0 [Indexer]: now sync _11.frq
IW 0 [Indexer]: now sync _12.nrm
IW 0 [Indexer]: now sync _b.tii
IW 0 [Indexer]: now sync _1c.tis
IW 0 [Indexer]: now sync _14.tii
IW 0 [Indexer]: now sync _8.prx
IW 0 [Indexer]: now sync _s.tis
IW 0 [Indexer]: now sync _18.frq
IW 0 [Indexer]: now sync _6.prx
IW 0 [Indexer]: now sync _b.fnm
IW 0 [Indexer]: now sync _3.tis
IW 0 [Indexer]: now sync _z.frq
IW 0 [Indexer]: now sync _1.frq
IW 0 [Indexer]: now sync _a.tii
IW 0 [Indexer]: now sync _e.frq
IW 0 [Indexer]: now sync _m.frq
IW 0 [Indexer]: now sync _2.tii
IW 0 [Indexer]: now sync _17.prx
IW 0 [Indexer]: now sync _14.prx
IW 0 [Indexer]: now sync _7.nrm
IW 0 [Indexer]: now sync _n.frq
IW 0 [Indexer]: now sync _18.prx
IW 0 [Indexer]: now sync _c.nrm
IW 0 [Indexer]: now sync _q.nrm
IW 0 [Indexer]: now sync _17.nrm
IW 0 [Indexer]: now sync _v.fnm
IW 0 [Indexer]: now sync _16.frq
IW 0 [Indexer]: now sync _x.tis
IW 0 [Indexer]: now sync _a.tis
IW 0 [Indexer]: now sync _o.frq
IW 0 [Indexer]: now sync _d.frq
IW 0 [Indexer]: now sync _l.nrm
IW 0 [Indexer]: now sync _13.prx
IW 0 [Indexer]: now sync _9.prx
IW 0 [Indexer]: now sync _l.prx
IW 0 [Indexer]: now sync _c.prx
IW 0 [Indexer]: now sync _q.frq
IW 0 [Indexer]: now sync _x.tii
IW 0 [Indexer]: now sync _z.nrm
IW 0 [Indexer]: now sync _2.tis
IW 0 [Indexer]: now sync _11.prx
IW 0 [Indexer]: now sync _19.frq
IW 0 [Indexer]: now sync _c.frq
IW 0 [Indexer]: now sync _12.frq
IW 0 [Indexer]: now sync _k.prx
IW 0 [Indexer]: now sync _l.frq
IW 0 [Indexer]: now sync _3.fnm
IW 0 [Indexer]: now sync _p.frq
IW 0 [Indexer]: now sync _v.tis
IW 0 [Indexer]: now sync _n.tis
IW 0 [Indexer]: now sync _18.nrm
IW 0 [Indexer]: now sync _r.fnm
IW 0 [Indexer]: now sync _1b.nrm
IW 0 [Indexer]: now sync _u.fnm
IW 0 [Indexer]: now sync _k.nrm
IW 0 [Indexer]: now sync _1a.nrm
IW 0 [Indexer]: now sync _j.nrm
IW 0 [Indexer]: now sync _1b.tis
IW 0 [Indexer]: now sync _1c.frq
IW 0 [Indexer]: now sync _1.tii
IW 0 [Indexer]: now sync _n.tii
IW 0 [Indexer]: now sync _q.tis
IW 0 [Indexer]: now sync _j.prx
IW 0 [Indexer]: now sync _e.nrm
IW 0 [Indexer]: now sync _1b.tii
IW 0 [Indexer]: now sync _1.tis
IW 0 [Indexer]: now sync _q.tii
IW 0 [Indexer]: now sync _19.nrm
IW 0 [Indexer]: now sync _16.prx
IW 0 [Indexer]: now sync _9.fnm
IW 0 [Indexer]: now sync _12.prx
IW 0 [Indexer]: now sync _m.nrm
IW 0 [Indexer]: now sync _y.prx
IW 0 [Indexer]: now sync _v.nrm
IW 0 [Indexer]: now sync _d.prx
IW 0 [Indexer]: now sync _13.tis
IW 0 [Indexer]: now sync _r.prx
IW 0 [Indexer]: now sync _8.nrm
IW 0 [Indexer]: now sync _19.prx
IW 0 [Indexer]: now sync _o.nrm
IW 0 [Indexer]: now sync _11.fnm
IW 0 [Indexer]: now sync _v.tii
IW 0 [Indexer]: now sync _13.tii
IW 0 [Indexer]: now sync _d.nrm
IW 0 [Indexer]: now sync _6.fnm
IW 0 [Indexer]: now sync _c.fnm
IW 0 [Indexer]: now sync _13.frq
IW 0 [Indexer]: now sync _j.frq
IW 0 [Indexer]: now sync _f.frq
IW 0 [Indexer]: now sync _p.prx
IW 0 [Indexer]: now sync _16.nrm
IW 0 [Indexer]: now sync _h.prx
IW 0 [Indexer]: now sync _r.tii
IW 0 [Indexer]: now sync _4.fnm
IW 0 [Indexer]: now sync _19.tii
IW 0 [Indexer]: now sync _z.fnm
IW 0 [Indexer]: now sync _1.fnm
IW 0 [Indexer]: now sync _h.nrm
IW 0 [Indexer]: now sync _o.tis
IW 0 [Indexer]: now sync _19.tis
IW 0 [Indexer]: now sync _k.tii
IW 0 [Indexer]: now sync _v.frq
IW 0 [Indexer]: now sync _16.fnm
IW 0 [Indexer]: now sync _5.tis
IW 0 [Indexer]: now sync _w.tii
IW 0 [Indexer]: now sync _t.tis
IW 0 [Indexer]: now sync _15.fnm
IW 0 [Indexer]: now sync _0.prx
IW 0 [Indexer]: now sync _n.fnm
IW 0 [Indexer]: now sync _k.tis
IW 0 [Indexer]: now sync _w.tis
IW 0 [Indexer]: now sync _3.nrm
IW 0 [Indexer]: now sync _f.nrm
IW 0 [Indexer]: now sync _w.frq
IW 0 [Indexer]: now sync _0.fnm
IW 0 [Indexer]: now sync _t.tii
IW 0 [Indexer]: now sync _a.frq
IW 0 [Indexer]: now sync _s.prx
IW 0 [Indexer]: now sync _d.fnm
IW 0 [Indexer]: now sync _3.prx
IW 0 [Indexer]: now sync _17.fnm
IW 0 [Indexer]: now sync _5.tii
IW 0 [Indexer]: now sync _s.frq
IW 0 [Indexer]: now sync _1.prx
IW 0 [Indexer]: now sync _8.fnm
IW 0 [Indexer]: now sync _f.prx
IW 0 [Indexer]: now sync _12.fnm
IW 0 [Indexer]: now sync _u.prx
IW 0 [Indexer]: now sync _o.tii
IW 0 [Indexer]: now sync _t.nrm
IW 0 [Indexer]: now sync _e.prx
IW 0 [Indexer]: now sync _r.tis
IW 0 [Indexer]: now sync _1.nrm
IW 0 [Indexer]: now sync _i.prx
IW 0 [Indexer]: now sync _17.tis
IW 0 [Indexer]: now sync _i.nrm
IW 0 [Indexer]: now sync _3.frq
IW 0 [Indexer]: now sync _11.tii
IW 0 [Indexer]: now sync _z.tii
IW 0 [Indexer]: now sync _l.fnm
IW 0 [Indexer]: now sync _9.nrm
IW 0 [Indexer]: now sync _p.fnm
IW 0 [Indexer]: now sync _13.nrm
IW 0 [Indexer]: now sync _y.frq
IW 0 [Indexer]: now sync _11.tis
IW 0 [Indexer]: now sync _z.tis
IW 0 [Indexer]: now sync _g.fnm
IW 0 [Indexer]: now sync _1c.fnm
IW 0 [Indexer]: now sync _11.nrm
IW 0 [Indexer]: now sync _w.nrm
IW 0 [Indexer]: now sync _r.frq
IW 0 [Indexer]: now sync _4.tis
IW 0 [Indexer]: now sync _17.tii
IW 0 [Indexer]: now sync _0.nrm
IW 0 [Indexer]: now sync _4.tii
IW 0 [Indexer]: now sync _1b.fnm
IW 0 [Indexer]: now sync _1c.prx
IW 0 [Indexer]: now sync _9.frq
IW 0 [Indexer]: now sync _7.prx
IW 0 [Indexer]: now sync _f.fnm
IW 0 [Indexer]: now sync _j.fnm
IW 0 [Indexer]: now sync _0.fdx
IW 0 [Indexer]: now sync _10.nrm
IW 0 [Indexer]: now sync _k.frq
IW 0 [Indexer]: now sync _i.fnm
IW 0 [Indexer]: now sync _0.fdt
IW 0 [Indexer]: done all syncs
IW 0 [Indexer]: commit: pendingCommit != null
IW 0 [Indexer]: commit: wrote segments file "segments_2"
IFD [Indexer]: now checkpoint "segments_2" [49 segments ; isCommit = true]
IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
IFD [Indexer]: delete "segments_1"
IW 0 [Indexer]: commit: done
IW 0 [Indexer]: at close: _0:C32607->_0 _1:C29043->_0 _2:C28376->_0
_3:C26936->_0 _4:C24129->_0 _5:C28737->_0 _6:C31930->_0 _7:C30328->_0
_8:C29899->_0 _9:C29256->_0 _a:C28152->_0 _b:C28085->_0 _c:C25826->_0
_d:C24897->_0 _e:C23703->_0 _f:C22817->_0 _g:C22048->_0 _h:C20403->_0
_i:C15832->_0 _j:C15530->_0 _k:C15154->_0 _l:C15143->_0 _m:C14772->_0
_n:C14553->_0 _o:C13915->_0 _p:C13700->_0 _q:C13629->_0 _r:C15237->_0
_s:C15104->_0 _t:C14397->_0 _u:C14221->_0 _v:C13609->_0 _w:C13667->_0
_x:C13127->_0 _y:C18883->_0 _z:C32665->_0 _10:C30048->_0 _11:C28918->_0
_12:C28061->_0 _13:C27951->_0 _14:C25719->_0 _15:C24619->_0 _16:C24737->_0
_17:C24024->_0 _18:C23092->_0 _19:C21704->_0 _1a:C21226->_0 _1b:C20658->_0
_1c:C6343->_0


Peter

On Wed, Oct 28, 2009 at 5:23 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> The unit tests do test multi-segment indexes (though we could always
> use deeper testing here), but they don't test big-ish indexes, like this,
> very well.
>
> Are you also using JDK 1.6.0_16 when running CheckIndex?  If you run
> CheckIndex on the same index several times in a row, does it report
> precisely the same problems?
>
> This exception in CheckIndex is very odd: it checks whether the
> docFreq reported for each term in the TermEnum actually matches the
> number of docs that it was able to iterate through using the
> TermPositions.  The reason why it's very odd is that under the hood
> TermPositions is supposed to be using the very same source of docFreq,
> to figure out how many docs it's supposed to read.  It may not be
> index corruption but rather a bug somewhere in CheckIndex, or, in the
> APIs it's using.
>
> What settings are you using in your IndexWriter (that differ from its
> defaults)?  If you e.g. increase the frequency of flushing, can you get
> this error to happen with a smaller number of docs added?  Just trying
> to box the issue in...
>
> Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?
>
> Mike
>
> On Tue, Oct 27, 2009 at 4:10 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> > Without the optimize, it looks like there are errors on all segments except
> > the first:
> >
> > Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
> >
> > Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
> > 2.9]
> >  1 of 3: name=_0 docCount=413557
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=1,148.795
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> > exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=0
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...OK [7704599 terms; 180318249 terms/docs pairs;
> > 340258711 tokens]
> >    test: stored fields.......OK [1240671 total field count; avg 3 fields
> > per doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> >
> >  2 of 3: name=_1 docCount=359203
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=1,125.103
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> > exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=413557
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=43 !=
> > num docs seen 4 + num docs deleted 0]
> > java.lang.RuntimeException: term literals:cfid196$ docFreq=43 != num docs
> > seen 4 + num docs deleted 0
> >    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >    test: stored fields.......OK [1077609 total field count; avg 3 fields
> > per doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> > FAILED
> >    WARNING: fixIndex() would remove reference to this segment; full
> > exception:
> > java.lang.RuntimeException: Term Index test failed
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >
> >  3 of 3: name=_2 docCount=304659
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=961.764
> >    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> > exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> > java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=772760
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
> > seen 245 + num docs deleted 0]
> > java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 245 +
> > num docs deleted 0
> >    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >    test: stored fields.......OK [913977 total field count; avg 3 fields per
> > doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> > FAILED
> >    WARNING: fixIndex() would remove reference to this segment; full
> > exception:
> > java.lang.RuntimeException: Term Index test failed
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >
> > WARNING: 2 broken segments (containing 663862 documents) detected
> > WARNING: would write new segments file, and 663862 documents would be lost,
> > if -fix were specified
> >
> >
> > Do the unit tests create multi-segment indexes?
> >
> > Peter
> >
> > On Tue, Oct 27, 2009 at 3:08 PM, Peter Keegan <peterlkeegan@gmail.com
> >wrote:
> >
> >> It's reproducible with a large no. of docs (>1 million), but not with 100K
> >> docs.
> >> I got same error with jvm 1.6.0_16.
> >> The index was optimized after all docs are added. I'll try removing the
> >> optimize.
> >>
> >> Peter
> >>
> >>
> >> On Tue, Oct 27, 2009 at 2:57 PM, Michael McCandless <
> >> lucene@mikemccandless.com> wrote:
> >>
> >>> This is odd -- is it reproducible?
> >>>
> >>> Can you narrow it down to a small set of docs that when indexed
> >>> produce a corrupted index?
> >>>
> >>> If you attempt to optimize the index, does it fail?
> >>>
> >>> Mike
> >>>
> >>> On Tue, Oct 27, 2009 at 1:40 PM, Peter Keegan <pe...@gmail.com>
> >>> wrote:
> >>> > It seems the index is corrupted immediately after the initial build (ample
> >>> > disk space was provided):
> >>> >
> >>> > Output from CheckIndex:
> >>> >
> >>> > Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
> >>> >
> >>> > Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene 2.9]
> >>> >  1 of 1: name=_7 docCount=1077025
> >>> >    compound=false
> >>> >    hasProx=true
> >>> >    numFiles=8
> >>> >    size (MB)=3,201.196
> >>> >    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2, os=Windows
> >>> > 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
> >>> > 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
> >>> > java.vendor=Sun Microsystems Inc.}
> >>> >    docStoreOffset=0
> >>> >    docStoreSegment=_0
> >>> >    docStoreIsCompoundFile=false
> >>> >    no deletions
> >>> >    test: open reader.........OK
> >>> >    test: fields..............OK [33 fields]
> >>> >    test: field norms.........OK [33 fields]
> >>> >    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
> >>> > seen 482 + num docs deleted 0]
> >>> > java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 482 +
> >>> > num docs deleted 0
> >>> >    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
> >>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
> >>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >>> >    test: stored fields.......OK [3231075 total field count; avg 3 fields
> >>> > per doc]
> >>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> >>> > vector fields per doc]
> >>> > FAILED
> >>> >    WARNING: fixIndex() would remove reference to this segment; full
> >>> > exception:
> >>> > java.lang.RuntimeException: Term Index test failed
> >>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
> >>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >>> >
> >>> > WARNING: 1 broken segments (containing 1077025 documents) detected
> >>> > WARNING: would write new segments file, and 1077025 documents would be lost,
> >>> > if -fix were specified
> >>> >
> >>> > Searching on this index seems to be fine, though.
> >>> >
> >>> > Here is the IndexWriter log from the build:
> >>> >
> >>> > IFD [Indexer]: setInfoStream
> >>> > deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
> >>> > IW 0 [Indexer]: setInfoStream:
> >>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
> >>> > autoCommit=false
> >>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2
> >>> > mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2
> >>> > ramBufferSizeMB=16.0
> >>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> >>> > maxFieldLength=2147483647 index=
> >>> > IW 0 [Indexer]: setRAMBufferSizeMB 910.25
> >>> > IW 0 [Indexer]: setMaxBufferedDocs 1000000
> >>> > IW 0 [Indexer]: flush at getReader
> >>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=null
> >>> docStoreOffset=0
> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> >>> > numBufDelTerms=0
> >>> > IW 0 [Indexer]:   index before flush
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463
> >>> allocMB=886.463
> >>> > deletesMB=23.803 triggerMB=910.25
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
> >>> > docStoreOffset=0 flushDocs=true flushDeletes=false
> flushDocStores=false
> >>> > numDocs=171638 numBufDelTerms=171638
> >>> > IW 0 [UpdWriterBuild]:   index before flush
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _0
> numDocs=171638
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712
> >>> newFlushedSize=573198529
> >>> > docs/MB=313.985 new/old=61.666%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977
> >>> allocMB=901.32
> >>> > deletesMB=52.274 triggerMB=910.25
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
> >>> > docStoreOffset=171638 flushDocs=true flushDeletes=false
> >>> flushDocStores=false
> >>> > numDocs=204995 numBufDelTerms=204995
> >>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _1
> numDocs=204995
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632
> >>> newFlushedSize=544283851
> >>> > docs/MB=394.928 new/old=60.499%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
> >>> usedMB=834.645 vs
> >>> > trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
> >>> > byteBlockFree=35.938 charBlockFree=8.938
> >>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
> >>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613
> usedMB=910.272
> >>> > allocMB=834.707
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
> >>> > docStoreOffset=376633 flushDocs=true flushDeletes=false
> >>> flushDocStores=false
> >>> > numDocs=168236 numBufDelTerms=168236
> >>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> >>> _1:C204995->_0
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _2
> numDocs=168236
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224
> >>> newFlushedSize=530720464
> >>> > docs/MB=332.394 new/old=60.641%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282
> >>> allocMB=835.832
> >>> > deletesMB=95.997 triggerMB=910.25
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
> >>> > docStoreOffset=544869 flushDocs=true flushDeletes=false
> >>> flushDocStores=false
> >>> > numDocs=146894 numBufDelTerms=146894
> >>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> >>> _1:C204995->_0
> >>> > _2:C168236->_0
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _3
> numDocs=146894
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800
> >>> newFlushedSize=522388771
> >>> > docs/MB=294.856 new/old=61.181%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724
> >>> allocMB=835.832
> >>> > deletesMB=118.535 triggerMB=910.25
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
> >>> > docStoreOffset=691763 flushDocs=true flushDeletes=false
> >>> flushDocStores=false
> >>> > numDocs=162034 numBufDelTerms=162034
> >>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> >>> _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _4
> numDocs=162034
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400
> >>> newFlushedSize=498741034
> >>> > docs/MB=340.668 new/old=60.076%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
> >>> usedMB=771.396 vs
> >>> > trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
> >>> > byteBlockFree=39.688 charBlockFree=7.188
> >>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
> >>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374
> usedMB=910.271
> >>> > allocMB=771.458
> >>> > IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
> >>> > docStoreOffset=853797 flushDocs=true flushDeletes=false
> >>> flushDocStores=false
> >>> > numDocs=146250 numBufDelTerms=146250
> >>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> >>> _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> >>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _5
> numDocs=146250
> >>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816
> >>> newFlushedSize=485212402
> >>> > docs/MB=316.056 new/old=59.987%
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ;
> isCommit
> >>> =
> >>> > false]
> >>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
> >>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
> >>> > IW 0 [UpdWriterBuild]: CMS: now merge
> >>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> >>> > IW 0 [Indexer]: commit: start
> >>> > IW 0 [Indexer]: commit: now prepare
> >>> > IW 0 [Indexer]: prepareCommit: flush
> >>> > IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
> >>> > docStoreOffset=1000047 flushDocs=true flushDeletes=true
> >>> flushDocStores=true
> >>> > numDocs=76978 numBufDelTerms=76978
> >>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> > IW 0 [Indexer]:   flush shared docStore segment _0
> >>> > IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
> >>> > numDocs=1077025
> >>> > IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
> >>> > IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
> >>> > docs/MB=295.486 new/old=56.096%
> >>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
> >>> false]
> >>> > IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0
> deleted
> >>> > docIDs and 0 deleted queries on 7 segments.
> >>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
> >>> false]
> >>> > IW 0 [Indexer]: LMP: findMerges: 7 segments
> >>> > IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
> >>> > IW 0 [Indexer]: CMS: now merge
> >>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> _2:C168236->_0
> >>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> >>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> >>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> >>> > IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > changeCount=21
> >>> > IW 0 [Indexer]: now sync _0.tis
> >>> > IW 0 [Indexer]: now sync _5.prx
> >>> > IW 0 [Indexer]: now sync _3.frq
> >>> > IW 0 [Indexer]: now sync _3.tii
> >>> > IW 0 [Indexer]: now sync _1.frq
> >>> > IW 0 [Indexer]: now sync _6.frq
> >>> > IW 0 [Indexer]: now sync _4.prx
> >>> > IW 0 [Indexer]: now sync _4.fnm
> >>> > IW 0 [Indexer]: now sync _2.tii
> >>> > IW 0 [Indexer]: now sync _3.fnm
> >>> > IW 0 [Indexer]: now sync _1.fnm
> >>> > IW 0 [Indexer]: now sync _6.tis
> >>> > IW 0 [Indexer]: now sync _4.frq
> >>> > IW 0 [Indexer]: now sync _5.nrm
> >>> > IW 0 [Indexer]: now sync _5.tis
> >>> > IW 0 [Indexer]: now sync _1.tii
> >>> > IW 0 [Indexer]: now sync _4.tis
> >>> > IW 0 [Indexer]: now sync _0.prx
> >>> > IW 0 [Indexer]: now sync _3.nrm
> >>> > IW 0 [Indexer]: now sync _4.tii
> >>> > IW 0 [Indexer]: now sync _0.nrm
> >>> > IW 0 [Indexer]: now sync _5.fnm
> >>> > IW 0 [Indexer]: now sync _1.tis
> >>> > IW 0 [Indexer]: now sync _0.fnm
> >>> > IW 0 [Indexer]: now sync _2.prx
> >>> > IW 0 [Indexer]: now sync _6.tii
> >>> > IW 0 [Indexer]: now sync _4.nrm
> >>> > IW 0 [Indexer]: now sync _2.frq
> >>> > IW 0 [Indexer]: now sync _5.frq
> >>> > IW 0 [Indexer]: now sync _3.prx
> >>> > IW 0 [Indexer]: now sync _5.tii
> >>> > IW 0 [Indexer]: now sync _2.fnm
> >>> > IW 0 [Indexer]: now sync _1.prx
> >>> > IW 0 [Indexer]: now sync _2.tis
> >>> > IW 0 [Indexer]: now sync _0.tii
> >>> > IW 0 [Indexer]: now sync _6.prx
> >>> > IW 0 [Indexer]: now sync _0.frq
> >>> > IW 0 [Indexer]: now sync _6.fnm
> >>> > IW 0 [Indexer]: now sync _0.fdx
> >>> > IW 0 [Indexer]: now sync _6.nrm
> >>> > IW 0 [Indexer]: now sync _0.fdt
> >>> > IW 0 [Indexer]: now sync _1.nrm
> >>> > IW 0 [Indexer]: now sync _2.nrm
> >>> > IW 0 [Indexer]: now sync _3.tis
> >>> > IW 0 [Indexer]: done all syncs
> >>> > IW 0 [Indexer]: commit: pendingCommit != null
> >>> > IW 0 [Indexer]: commit: wrote segments file "segments_2"
> >>> > IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit =
> >>> true]
> >>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
> >>> > IFD [Indexer]: delete "segments_1"
> >>> > IW 0 [Indexer]: commit: done
> >>> > IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
> >>> docStoreOffset=0
> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> >>> > numBufDelTerms=0
> >>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0
> >>> _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > [optimize] [total 1 pending]
> >>> > IW 0 [Indexer]: CMS: now merge
> >>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> >>> _2:C168236->_0
> >>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> >>> > IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > into _7 [optimize]
> >>> > IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
> >>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> >>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
> >>> > IW 0 [Lucene Merge Thread #0]: now merge
> >>> >  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> >>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
> >>> >  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> >>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> >>> > IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > into _7 [optimize]
> >>> > IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
> >>> > IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0
> >>> _1:C204995->_0
> >>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> >>> _6:C76978->_0
> >>> > into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
> >>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> >>> > IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
> >>> > _1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> >>> _5:C146250->_0
> >>> > _6:C76978->_0 into _7 [optimize]
> >>> > IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments
> ;
> >>> > isCommit = false]
> >>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
> >>> > IW 0 [Indexer]: now flush at close
> >>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
> >>> docStoreOffset=0
> >>> > flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
> >>> > numBufDelTerms=0
> >>> > IW 0 [Indexer]:   index before flush _7:C1077025->_0
> >>> > IW 0 [Indexer]:   flush shared docStore segment _6
> >>> > IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6
> >>> numDocs=0
> >>> > IW 0 [Indexer]: CMS: now merge
> >>> > IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
> >>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> >>> > IW 0 [Indexer]: now call final commit()
> >>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> >>> > IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
> >>> > IW 0 [Indexer]: now sync _7.prx
> >>> > IW 0 [Indexer]: now sync _7.fnm
> >>> > IW 0 [Indexer]: now sync _7.tis
> >>> > IW 0 [Indexer]: now sync _7.nrm
> >>> > IW 0 [Indexer]: now sync _7.tii
> >>> > IW 0 [Indexer]: now sync _7.frq
> >>> > IW 0 [Indexer]: done all syncs
> >>> > IW 0 [Indexer]: commit: pendingCommit != null
> >>> > IW 0 [Indexer]: commit: wrote segments file "segments_3"
> >>> > IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit =
> >>> true]
> >>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
> >>> > IFD [Indexer]: delete "_0.tis"
> >>> > IFD [Indexer]: delete "_5.prx"
> >>> > IFD [Indexer]: delete "_3.tii"
> >>> > IFD [Indexer]: delete "_3.frq"
> >>> > IFD [Indexer]: delete "_1.frq"
> >>> > IFD [Indexer]: delete "_6.frq"
> >>> > IFD [Indexer]: delete "_4.prx"
> >>> > IFD [Indexer]: delete "_4.fnm"
> >>> > IFD [Indexer]: delete "_2.tii"
> >>> > IFD [Indexer]: delete "_3.fnm"
> >>> > IFD [Indexer]: delete "_1.fnm"
> >>> > IFD [Indexer]: delete "_6.tis"
> >>> > IFD [Indexer]: delete "_4.frq"
> >>> > IFD [Indexer]: delete "_5.nrm"
> >>> > IFD [Indexer]: delete "_5.tis"
> >>> > IFD [Indexer]: delete "_1.tii"
> >>> > IFD [Indexer]: delete "_4.tis"
> >>> > IFD [Indexer]: delete "_0.prx"
> >>> > IFD [Indexer]: delete "_3.nrm"
> >>> > IFD [Indexer]: delete "_4.tii"
> >>> > IFD [Indexer]: delete "_0.nrm"
> >>> > IFD [Indexer]: delete "_5.fnm"
> >>> > IFD [Indexer]: delete "_1.tis"
> >>> > IFD [Indexer]: delete "_0.fnm"
> >>> > IFD [Indexer]: delete "_2.prx"
> >>> > IFD [Indexer]: delete "_6.tii"
> >>> > IFD [Indexer]: delete "_4.nrm"
> >>> > IFD [Indexer]: delete "_2.frq"
> >>> > IFD [Indexer]: delete "_5.frq"
> >>> > IFD [Indexer]: delete "_3.prx"
> >>> > IFD [Indexer]: delete "_5.tii"
> >>> > IFD [Indexer]: delete "_2.fnm"
> >>> > IFD [Indexer]: delete "_1.prx"
> >>> > IFD [Indexer]: delete "_2.tis"
> >>> > IFD [Indexer]: delete "_0.tii"
> >>> > IFD [Indexer]: delete "_6.prx"
> >>> > IFD [Indexer]: delete "_0.frq"
> >>> > IFD [Indexer]: delete "segments_2"
> >>> > IFD [Indexer]: delete "_6.fnm"
> >>> > IFD [Indexer]: delete "_6.nrm"
> >>> > IFD [Indexer]: delete "_1.nrm"
> >>> > IFD [Indexer]: delete "_2.nrm"
> >>> > IFD [Indexer]: delete "_3.tis"
> >>> > IW 0 [Indexer]: commit: done
> >>> > IW 0 [Indexer]: at close: _7:C1077025->_0
> >>> >
> >>> > I see no errors.
> >>> > Peter
> >>> >
> >>> >
> >>> > On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >
> >>> >>
> >>> >>
> >>> >> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
> >>> >> lucene@mikemccandless.com> wrote:
> >>> >>
> >>> >>> OK that exception looks more reasonable, for a disk full event.
> >>> >>>
> >>> >>> But, I can't tell from your follow-on emails: did this lead to index
> >>> >>> corruption?
> >>> >>>
> >>> >>
> >>> >> Yes, but this may be caused by the application ignoring a Lucene exception
> >>> >> somewhere else. I will chase this down.
> >>> >>
> >>> >>>
> >>> >>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
> >>> >>> really should upgrade that to the latest 1.6.0 -- there's at least one
> >>> >>> known problem with Lucene and early 1.6.0 JREs.
> >>> >>>
> >>> >>
> >>> >> Yes, I remember this problem - that's why we stayed at _03
> >>> >> Thanks.
> >>> >>
> >>> >>>
> >>> >>> Mike
> >>> >>>
> >>> >>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >>> > After rebuilding the corrupted indexes, the low disk space exception is
> >>> >>> > now occurring as expected. Sorry for the distraction.
> >>> >>> >
> >>> >>> > fyi, here are the details:
> >>> >>> >
> >>> >>> >  java.io.IOException: There is not enough space on the disk
> >>> >>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
> >>> >>> >    at java.io.RandomAccessFile.write(Unknown Source)
> >>> >>> >    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
> >>> >>> >    at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
> >>> >>> >    at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
> >>> >>> >    at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
> >>> >>> >    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
> >>> >>> >    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
> >>> >>> >    at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
> >>> >>> >    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
> >>> >>> >    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
> >>> >>> >    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
> >>> >>> >    at org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
> >>> >>> >    at org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
> >>> >>> >
> >>> >>> >
> >>> >>> > And the corresponding index info log:
> >>> >>> >
> >>> >>> > IFD [Indexer]: setInfoStream
> >>> >>> > deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
> >>> >>> > IW 1 [Indexer]: setInfoStream:
> >>> >>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
> >>> >>> > autoCommit=false
> >>> >>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76
> >>> >>> > mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357
> >>> >>> > ramBufferSizeMB=16.0
> >>> >>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> >>> >>> > maxFieldLength=2147483647 index=
> >>> >>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
> >>> >>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
> >>> >>> docStoreOffset=0
> >>> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> >>> >>> > numBufDelTerms=0
> >>> >>> > IW 1 [Indexer]:   index before flush
> >>> >>> > IW 1 [Indexer]: now start transaction
> >>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> >>> >>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
> >>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> >>> >>> > IW 1 [Indexer]: CMS: now merge
> >>> >>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
> >>> >>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
> >>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1 pending]
> >>> >>> > IW 1 [Indexer]: now merge
> >>> >>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
> >>> >>> >  index=_7:Cx1075533->_0** _8:Cx2795**
> >>> >>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> >>> > IW 1 [Indexer]: merge: total 1074388 docs
> >>> >>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> >>> > index=_7:Cx1075533->_0** _8:Cx2795**
> >>> >>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> >>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit = false]
> >>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> >>> >>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
> >>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> >>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
> >>> >>> > IW 1 [Indexer]: now merge
> >>> >>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
> >>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
> >>> >>> >  index=_0:C1074388 _8:Cx2795**
> >>> >>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
> >>> >>> > IW 1 [Indexer]: merge: total 2795 docs
> >>> >>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
> >>> >>> > [mergeDocStores] exc=java.io.IOException: There is not enough space on
> >>> >>> > the disk
> >>> >>> > IW 1 [Indexer]: hit exception during merge
> >>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file "_1.fdt"
> >>> >>> > IFD [Indexer]: delete "_1.fdt"
> >>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file "_1.fdx"
> >>> >>> > IFD [Indexer]: delete "_1.fdx"
> >>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file "_1.fnm"
> >>> >>> > IFD [Indexer]: delete "_1.fnm"
> >>> >>> > IW 1 [Indexer]: now rollback transaction
> >>> >>> > IW 1 [Indexer]: all running merges have aborted
> >>> >>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit = false]
> >>> >>> > IFD [Indexer]: delete "_0.nrm"
> >>> >>> > IFD [Indexer]: delete "_0.tis"
> >>> >>> > IFD [Indexer]: delete "_0.fnm"
> >>> >>> > IFD [Indexer]: delete "_0.tii"
> >>> >>> > IFD [Indexer]: delete "_0.frq"
> >>> >>> > IFD [Indexer]: delete "_0.fdx"
> >>> >>> > IFD [Indexer]: delete "_0.prx"
> >>> >>> > IFD [Indexer]: delete "_0.fdt"
> >>> >>> >
> >>> >>> >
> >>> >>> > Peter
> >>> >>> >
> >>> >>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >>> >
> >>> >>> >>
> >>> >>> >>
> >>> >>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
> >>> >>> >> lucene@mikemccandless.com> wrote:
> >>> >>> >>
> >>> >>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
> >>> >>> >>> > lucene@mikemccandless.com> wrote:
> >>> >>> >>> >
> >>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >>> >> > Even running in console mode, the exception is difficult to interpret.
> >>> >>> >> > Here's an exception that I think occurred during an add document,
> >>> >>> >> > commit or close:
> >>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
> >>> >>> >> > segmentInfo shows 5777
> >>> >>> >>
> >>> >>> >> That's spooky.  Do you have the full exception for this one?  What IO
> >>> >>> >> system are you running on?  (Is it just a local drive on your windows
> >>> >>> >> computer?) It's almost as if the IO system is not generating an
> >>> >>> >> IOException to Java when disk fills up.
> >>> >>> >>> >>
> >>> >>> >>> >
> >>> >>> >>> > Index and code are all on a local drive. There is no other
> >>> exception
> >>> >>> >>> coming
> >>> >>> >>> > back - just what I reported.
> >>> >>> >>>
> >>> >>> >>> But, you didn't report a traceback for this first one?
> >>> >>> >>>
> >>> >>> >>
> >>> >>> >> Yes, I need to add some more printStackTrace calls.
> >>> >>> >>
> >>> >>> >>
> >>> >>> >>>
> >>> >>> >>> >> > I ensured that the disk space was low before updating the index.
> >>> >>> >>> >>
> >>> >>> >>> >> You mean, to intentionally test the disk-full case?
> >>> >>> >>> >
> >>> >>> >>> > Yes, that's right.
> >>> >>> >>>
> >>> >>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
> >>> >>> >>> corruption to happen again, and post back the resulting output?  Make
> >>> >>> >>> sure your index first passes CheckIndex before starting (so we don't
> >>> >>> >>> begin the test w/ any pre-existing index corruption).
> >>> >>> >>>
> >>> >>> >>
> >>> >>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
> >>> >>> >> build new indexes from scratch. This will take a while.
> >>> >>> >>
> >>> >>> >>
> >>> >>> >>> >> > On another occasion, the exception was:
> >>> >>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
> >>> >>> >>> >> > _4:C126 _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
> >>> >>> >>> >>
> >>> >>> >>> >> In this case, the SegmentMerger was trying to open this segment, but
> >>> >>> >>> >> on attempting to read the first int from the fdx (fields index) file
> >>> >>> >>> >> for one of the segments, it hit EOF.
> >>> >>> >>> >>
> >>> >>> >>> >> This is also spooky -- this looks like index corruption, which should
> >>> >>> >>> >> never happen on hitting disk full.
> >>> >>> >>> >
> >>> >>> >>> > That's what I thought, too. Could Lucene be catching the IOException
> >>> >>> >>> > and turning it into a different exception?
> >>> >>> >>>
> >>> >>> >>> I think that's unlikely, but I guess possible.  We have "disk full"
> >>> >>> >>> tests in the unit tests that throw an IOException at different times.
> >>> >>> >>>
> >>> >>> >>> What exact Windows version are you using?  The local drive is NTFS?
> >>> >>> >>
> >>> >>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
> >>> >>> >>
> >>> >>> >>
> >>> >>> >>>
> >>> >>> >>> Mike
> >>> >>> >>>
> >>> >>> >>>
> >>> >>> >>>
> >>> >>> >>>
> >>> >>> >>
> >>> >>> >
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>
> >>> >
> >>>
> >>>
> >>>
> >>
> >
>
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
The unit tests do cover multi-segment indexes (though we could always
use deeper testing here), but they don't exercise big-ish indexes like
this one very well.

Are you also using JDK 1.6.0_16 when running CheckIndex?  If you run
CheckIndex on the same index several times in a row, does it report
precisely the same problems?
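
For what it's worth, a re-check loop along these lines makes it easy
to diff successive reports (a minimal sketch against the 2.9
CheckIndex API; the path argument and log-file names are made up):

import java.io.File;
import java.io.PrintStream;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.SimpleFSDirectory;

public class RecheckIndex {
  public static void main(String[] args) throws Exception {
    // args[0] = path to the suspect index directory
    Directory dir = new SimpleFSDirectory(new File(args[0]));
    for (int pass = 0; pass < 3; pass++) {
      CheckIndex checker = new CheckIndex(dir);
      // one log per pass, so the reports can be compared afterwards
      checker.setInfoStream(new PrintStream("checkindex-pass" + pass + ".log"));
      CheckIndex.Status status = checker.checkIndex();
      System.out.println("pass " + pass + ": clean=" + status.clean);
    }
    dir.close();
  }
}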

This exception in CheckIndex is very odd: it checks whether the
docFreq reported for each term in the TermEnum actually matches the
number of docs it was able to iterate through using the
TermPositions.  The reason it's so odd is that, under the hood,
TermPositions is supposed to use the very same source of docFreq to
figure out how many docs it's supposed to read.  So this may not be
index corruption, but rather a bug somewhere in CheckIndex or in the
APIs it's using.
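
Conceptually the check boils down to something like this (a rough
sketch over the public reader API, not CheckIndex's actual internals;
it assumes a segment with no deletions, as in the reports below):

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.index.TermPositions;
import org.apache.lucene.store.SimpleFSDirectory;

public class DocFreqCrossCheck {
  public static void main(String[] args) throws Exception {
    // read-only reader over the suspect index
    IndexReader reader =
        IndexReader.open(new SimpleFSDirectory(new File(args[0])), true);
    TermEnum terms = reader.terms();
    TermPositions postings = reader.termPositions();
    while (terms.next()) {
      Term term = terms.term();
      postings.seek(term);
      int seen = 0;
      while (postings.next()) {
        seen++;  // docs the postings iterator actually returns
      }
      // the iterator silently skips deleted docs, so with no deletions
      // docFreq should match the iterated count exactly
      if (terms.docFreq() != seen) {
        System.out.println("term " + term + ": docFreq=" + terms.docFreq()
            + " != num docs seen " + seen);
      }
    }
    postings.close();
    terms.close();
    reader.close();
  }
}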

What settings are you using in your IndexWriter (that differ from its
defaults)?  If you e.g. increase the frequency of flushing, can you
get this error to happen with a smaller number of docs added?  Just
trying to box the issue in...
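
For reference, the non-default settings visible in the infoStream log
quoted below amount to roughly the following (a sketch: the analyzer
and directory choices are assumptions, the setter values are copied
from the log):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class WriterSettings {
  public static void main(String[] args) throws Exception {
    IndexWriter writer = new IndexWriter(
        new SimpleFSDirectory(new File(args[0])),
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);  // maxFieldLength=2147483647
    writer.setInfoStream(System.out);   // emits the IW/IFD lines shown below
    writer.setRAMBufferSizeMB(910.25);  // default is 16.0 MB
    writer.setMaxBufferedDocs(1000000);
    // shrinking the RAM buffer (e.g. back to 16.0) forces much more
    // frequent flushes and many more, smaller segments
    writer.close();
  }
}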

Also, what does Lucene version "2.9 exported - 2009-10-27 15:31:52" mean?

Mike

On Tue, Oct 27, 2009 at 4:10 PM, Peter Keegan <pe...@gmail.com> wrote:
> Without the optimize, it looks like there are errors on all segments except
> the first:
>
> Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
>
> Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
> 2.9]
>  1 of 3: name=_0 docCount=413557
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=1,148.795
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=0
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...OK [7704599 terms; 180318249 terms/docs pairs;
> 340258711 tokens]
>    test: stored fields.......OK [1240671 total field count; avg 3 fields
> per doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
>
>  2 of 3: name=_1 docCount=359203
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=1,125.103
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=413557
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=43 !=
> num docs seen 4 + num docs deleted 0]
> java.lang.RuntimeException: term literals:cfid196$ docFreq=43 != num docs
> seen 4 + num docs deleted 0
>    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>    test: stored fields.......OK [1077609 total field count; avg 3 fields
> per doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
> FAILED
>    WARNING: fixIndex() would remove reference to this segment; full
> exception:
> java.lang.RuntimeException: Term Index test failed
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>
>  3 of 3: name=_2 docCount=304659
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=961.764
>    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
> exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
> java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=772760
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
> seen 245 + num docs deleted 0]
> java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 245 +
> num docs deleted 0
>    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>    test: stored fields.......OK [913977 total field count; avg 3 fields per
> doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
> FAILED
>    WARNING: fixIndex() would remove reference to this segment; full
> exception:
> java.lang.RuntimeException: Term Index test failed
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>
> WARNING: 2 broken segments (containing 663862 documents) detected
> WARNING: would write new segments file, and 663862 documents would be lost,
> if -fix were specified
>
>
> Do the unit tests create multi-segment indexes?
>
> Peter
>
> On Tue, Oct 27, 2009 at 3:08 PM, Peter Keegan <pe...@gmail.com>wrote:
>
>> It's reproducible with a large no. of docs (>1 million), but not with 100K
>> docs.
>> I got same error with jvm 1.6.0_16.
>> The index was optimized after all docs are added. I'll try removing the
>> optimize.
>>
>> Peter
>>
>>
>> On Tue, Oct 27, 2009 at 2:57 PM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>>> This is odd -- is it reproducible?
>>>
>>> Can you narrow it down to a small set of docs that when indexed
>>> produce a corrupted index?
>>>
>>> If you attempt to optimize the index, does it fail?
>>>
>>> Mike
>>>
>>> On Tue, Oct 27, 2009 at 1:40 PM, Peter Keegan <pe...@gmail.com>
>>> wrote:
>>> > It seems the index is corrupted immediately after the initial build
>>> (ample
>>> > disk space was provided):
>>> >
>>> > Output from CheckIndex:
>>> >
>>> > Opening index @
>>> D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
>>> >
>>> > Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS
>>> [Lucene
>>> > 2.9]
>>> >  1 of 1: name=_7 docCount=1077025
>>> >    compound=false
>>> >    hasProx=true
>>> >    numFiles=8
>>> >    size (MB)=3,201.196
>>> >    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2,
>>> os=Windows
>>> > 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
>>> > 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
>>> > java.vendor=Sun Microsystems Inc.}
>>> >    docStoreOffset=0
>>> >    docStoreSegment=_0
>>> >    docStoreIsCompoundFile=false
>>> >    no deletions
>>> >    test: open reader.........OK
>>> >    test: fields..............OK [33 fields]
>>> >    test: field norms.........OK [33 fields]
>>> >    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num
>>> docs
>>> > seen 482 + num docs deleted 0]
>>> > java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen
>>> 482 +
>>> > num docs deleted 0
>>> >    at
>>> org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>>> >    test: stored fields.......OK [3231075 total field count; avg 3 fields
>>> > per doc]
>>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
>>> > vector fields per doc]
>>> > FAILED
>>> >    WARNING: fixIndex() would remove reference to this segment; full
>>> > exception:
>>> > java.lang.RuntimeException: Term Index test failed
>>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>>> >
>>> > WARNING: 1 broken segments (containing 1077025 documents) detected
>>> > WARNING: would write new segments file, and 1077025 documents would be
>>> lost,
>>> > if -fix were specified
>>> >
>>> > Searching on this index seems to be fine, though.
>>> >
>>> > Here is the IndexWriter log from the build:
>>> >
>>> > IFD [Indexer]: setInfoStream
>>> >
>>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
>>> > IW 0 [Indexer]: setInfoStream:
>>> > dir=org.apache.lucene.store.SimpleFSDirectory@D
>>> :\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
>>> > autoCommit=false
>>> >
>>> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2mergeScheduler
>>> =org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2ramBufferSizeMB
>>> =16.0
>>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>>> > maxFieldLength=2147483647 index=
>>> > IW 0 [Indexer]: setRAMBufferSizeMB 910.25
>>> > IW 0 [Indexer]: setMaxBufferedDocs 1000000
>>> > IW 0 [Indexer]: flush at getReader
>>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=null
>>> docStoreOffset=0
>>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>>> > numBufDelTerms=0
>>> > IW 0 [Indexer]:   index before flush
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463
>>> allocMB=886.463
>>> > deletesMB=23.803 triggerMB=910.25
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
>>> > docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
>>> > numDocs=171638 numBufDelTerms=171638
>>> > IW 0 [UpdWriterBuild]:   index before flush
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=171638
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712
>>> newFlushedSize=573198529
>>> > docs/MB=313.985 new/old=61.666%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977
>>> allocMB=901.32
>>> > deletesMB=52.274 triggerMB=910.25
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
>>> > docStoreOffset=171638 flushDocs=true flushDeletes=false
>>> flushDocStores=false
>>> > numDocs=204995 numBufDelTerms=204995
>>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=204995
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632
>>> newFlushedSize=544283851
>>> > docs/MB=394.928 new/old=60.499%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
>>> usedMB=834.645 vs
>>> > trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
>>> > byteBlockFree=35.938 charBlockFree=8.938
>>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
>>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613 usedMB=910.272
>>> > allocMB=834.707
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
>>> > docStoreOffset=376633 flushDocs=true flushDeletes=false
>>> flushDocStores=false
>>> > numDocs=168236 numBufDelTerms=168236
>>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>>> _1:C204995->_0
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=168236
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224
>>> newFlushedSize=530720464
>>> > docs/MB=332.394 new/old=60.641%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282
>>> allocMB=835.832
>>> > deletesMB=95.997 triggerMB=910.25
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
>>> > docStoreOffset=544869 flushDocs=true flushDeletes=false
>>> flushDocStores=false
>>> > numDocs=146894 numBufDelTerms=146894
>>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>>> _1:C204995->_0
>>> > _2:C168236->_0
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=146894
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800
>>> newFlushedSize=522388771
>>> > docs/MB=294.856 new/old=61.181%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724
>>> allocMB=835.832
>>> > deletesMB=118.535 triggerMB=910.25
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
>>> > docStoreOffset=691763 flushDocs=true flushDeletes=false
>>> flushDocStores=false
>>> > numDocs=162034 numBufDelTerms=162034
>>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>>> _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=162034
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400
>>> newFlushedSize=498741034
>>> > docs/MB=340.668 new/old=60.076%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
>>> usedMB=771.396 vs
>>> > trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
>>> > byteBlockFree=39.688 charBlockFree=7.188
>>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
>>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374 usedMB=910.271
>>> > allocMB=771.458
>>> > IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
>>> > docStoreOffset=853797 flushDocs=true flushDeletes=false
>>> flushDocStores=false
>>> > numDocs=146250 numBufDelTerms=146250
>>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>>> _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=146250
>>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816
>>> newFlushedSize=485212402
>>> > docs/MB=316.056 new/old=59.987%
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
>>> =
>>> > false]
>>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
>>> =
>>> > false]
>>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
>>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
>>> > IW 0 [UpdWriterBuild]: CMS: now merge
>>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>>> > IW 0 [Indexer]: commit: start
>>> > IW 0 [Indexer]: commit: now prepare
>>> > IW 0 [Indexer]: prepareCommit: flush
>>> > IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
>>> > docStoreOffset=1000047 flushDocs=true flushDeletes=true
>>> flushDocStores=true
>>> > numDocs=76978 numBufDelTerms=76978
>>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> > IW 0 [Indexer]:   flush shared docStore segment _0
>>> > IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
>>> > numDocs=1077025
>>> > IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
>>> > IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
>>> > docs/MB=295.486 new/old=56.096%
>>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
>>> false]
>>> > IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0 deleted
>>> > docIDs and 0 deleted queries on 7 segments.
>>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
>>> false]
>>> > IW 0 [Indexer]: LMP: findMerges: 7 segments
>>> > IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
>>> > IW 0 [Indexer]: CMS: now merge
>>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> _2:C168236->_0
>>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
>>> > IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > changeCount=21
>>> > IW 0 [Indexer]: now sync _0.tis
>>> > IW 0 [Indexer]: now sync _5.prx
>>> > IW 0 [Indexer]: now sync _3.frq
>>> > IW 0 [Indexer]: now sync _3.tii
>>> > IW 0 [Indexer]: now sync _1.frq
>>> > IW 0 [Indexer]: now sync _6.frq
>>> > IW 0 [Indexer]: now sync _4.prx
>>> > IW 0 [Indexer]: now sync _4.fnm
>>> > IW 0 [Indexer]: now sync _2.tii
>>> > IW 0 [Indexer]: now sync _3.fnm
>>> > IW 0 [Indexer]: now sync _1.fnm
>>> > IW 0 [Indexer]: now sync _6.tis
>>> > IW 0 [Indexer]: now sync _4.frq
>>> > IW 0 [Indexer]: now sync _5.nrm
>>> > IW 0 [Indexer]: now sync _5.tis
>>> > IW 0 [Indexer]: now sync _1.tii
>>> > IW 0 [Indexer]: now sync _4.tis
>>> > IW 0 [Indexer]: now sync _0.prx
>>> > IW 0 [Indexer]: now sync _3.nrm
>>> > IW 0 [Indexer]: now sync _4.tii
>>> > IW 0 [Indexer]: now sync _0.nrm
>>> > IW 0 [Indexer]: now sync _5.fnm
>>> > IW 0 [Indexer]: now sync _1.tis
>>> > IW 0 [Indexer]: now sync _0.fnm
>>> > IW 0 [Indexer]: now sync _2.prx
>>> > IW 0 [Indexer]: now sync _6.tii
>>> > IW 0 [Indexer]: now sync _4.nrm
>>> > IW 0 [Indexer]: now sync _2.frq
>>> > IW 0 [Indexer]: now sync _5.frq
>>> > IW 0 [Indexer]: now sync _3.prx
>>> > IW 0 [Indexer]: now sync _5.tii
>>> > IW 0 [Indexer]: now sync _2.fnm
>>> > IW 0 [Indexer]: now sync _1.prx
>>> > IW 0 [Indexer]: now sync _2.tis
>>> > IW 0 [Indexer]: now sync _0.tii
>>> > IW 0 [Indexer]: now sync _6.prx
>>> > IW 0 [Indexer]: now sync _0.frq
>>> > IW 0 [Indexer]: now sync _6.fnm
>>> > IW 0 [Indexer]: now sync _0.fdx
>>> > IW 0 [Indexer]: now sync _6.nrm
>>> > IW 0 [Indexer]: now sync _0.fdt
>>> > IW 0 [Indexer]: now sync _1.nrm
>>> > IW 0 [Indexer]: now sync _2.nrm
>>> > IW 0 [Indexer]: now sync _3.tis
>>> > IW 0 [Indexer]: done all syncs
>>> > IW 0 [Indexer]: commit: pendingCommit != null
>>> > IW 0 [Indexer]: commit: wrote segments file "segments_2"
>>> > IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit =
>>> true]
>>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
>>> > IFD [Indexer]: delete "segments_1"
>>> > IW 0 [Indexer]: commit: done
>>> > IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
>>> docStoreOffset=0
>>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>>> > numBufDelTerms=0
>>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0
>>> _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > [optimize] [total 1 pending]
>>> > IW 0 [Indexer]: CMS: now merge
>>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>>> _2:C168236->_0
>>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>>> > IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > into _7 [optimize]
>>> > IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
>>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
>>> > IW 0 [Lucene Merge Thread #0]: now merge
>>> >  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
>>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
>>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
>>> >  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
>>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>>> > IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > into _7 [optimize]
>>> > IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
>>> > IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0
>>> _1:C204995->_0
>>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>>> _6:C76978->_0
>>> > into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
>>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>>> > IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
>>> > _1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>>> _5:C146250->_0
>>> > _6:C76978->_0 into _7 [optimize]
>>> > IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments ;
>>> > isCommit = false]
>>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
>>> > IW 0 [Indexer]: now flush at close
>>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
>>> docStoreOffset=0
>>> > flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
>>> > numBufDelTerms=0
>>> > IW 0 [Indexer]:   index before flush _7:C1077025->_0
>>> > IW 0 [Indexer]:   flush shared docStore segment _6
>>> > IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6
>>> numDocs=0
>>> > IW 0 [Indexer]: CMS: now merge
>>> > IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
>>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>>> > IW 0 [Indexer]: now call final commit()
>>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
>>> > IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
>>> > IW 0 [Indexer]: now sync _7.prx
>>> > IW 0 [Indexer]: now sync _7.fnm
>>> > IW 0 [Indexer]: now sync _7.tis
>>> > IW 0 [Indexer]: now sync _7.nrm
>>> > IW 0 [Indexer]: now sync _7.tii
>>> > IW 0 [Indexer]: now sync _7.frq
>>> > IW 0 [Indexer]: done all syncs
>>> > IW 0 [Indexer]: commit: pendingCommit != null
>>> > IW 0 [Indexer]: commit: wrote segments file "segments_3"
>>> > IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit =
>>> true]
>>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
>>> > IFD [Indexer]: delete "_0.tis"
>>> > IFD [Indexer]: delete "_5.prx"
>>> > IFD [Indexer]: delete "_3.tii"
>>> > IFD [Indexer]: delete "_3.frq"
>>> > IFD [Indexer]: delete "_1.frq"
>>> > IFD [Indexer]: delete "_6.frq"
>>> > IFD [Indexer]: delete "_4.prx"
>>> > IFD [Indexer]: delete "_4.fnm"
>>> > IFD [Indexer]: delete "_2.tii"
>>> > IFD [Indexer]: delete "_3.fnm"
>>> > IFD [Indexer]: delete "_1.fnm"
>>> > IFD [Indexer]: delete "_6.tis"
>>> > IFD [Indexer]: delete "_4.frq"
>>> > IFD [Indexer]: delete "_5.nrm"
>>> > IFD [Indexer]: delete "_5.tis"
>>> > IFD [Indexer]: delete "_1.tii"
>>> > IFD [Indexer]: delete "_4.tis"
>>> > IFD [Indexer]: delete "_0.prx"
>>> > IFD [Indexer]: delete "_3.nrm"
>>> > IFD [Indexer]: delete "_4.tii"
>>> > IFD [Indexer]: delete "_0.nrm"
>>> > IFD [Indexer]: delete "_5.fnm"
>>> > IFD [Indexer]: delete "_1.tis"
>>> > IFD [Indexer]: delete "_0.fnm"
>>> > IFD [Indexer]: delete "_2.prx"
>>> > IFD [Indexer]: delete "_6.tii"
>>> > IFD [Indexer]: delete "_4.nrm"
>>> > IFD [Indexer]: delete "_2.frq"
>>> > IFD [Indexer]: delete "_5.frq"
>>> > IFD [Indexer]: delete "_3.prx"
>>> > IFD [Indexer]: delete "_5.tii"
>>> > IFD [Indexer]: delete "_2.fnm"
>>> > IFD [Indexer]: delete "_1.prx"
>>> > IFD [Indexer]: delete "_2.tis"
>>> > IFD [Indexer]: delete "_0.tii"
>>> > IFD [Indexer]: delete "_6.prx"
>>> > IFD [Indexer]: delete "_0.frq"
>>> > IFD [Indexer]: delete "segments_2"
>>> > IFD [Indexer]: delete "_6.fnm"
>>> > IFD [Indexer]: delete "_6.nrm"
>>> > IFD [Indexer]: delete "_1.nrm"
>>> > IFD [Indexer]: delete "_2.nrm"
>>> > IFD [Indexer]: delete "_3.tis"
>>> > IW 0 [Indexer]: commit: done
>>> > IW 0 [Indexer]: at close: _7:C1077025->_0
>>> >
>>> > I see no errors.
>>> > Peter
>>> >
>>> >
>>> > On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com
>>> >wrote:
>>> >
>>> >>
>>> >>
>>> >> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
>>> >> lucene@mikemccandless.com> wrote:
>>> >>
>>> >>> OK that exception looks more reasonable, for a disk full event.
>>> >>>
>>> >>> But, I can't tell from your followon emails: did this lead to index
>>> >>> corruption?
>>> >>>
>>> >>
>>> >> Yes, but this may be caused by the application ignoring a Lucene
>>> exception
>>> >> somewhere else. I will chase this down.
>>> >>
>>> >>>
>>> >>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
>>> >>> really should upgrade that to the latest 1.6.0 -- there's at least one
>>> >>> known problem with Lucene and early 1.6.0 JREs.
>>> >>>
>>> >>
>>> >> Yes, I remember this problem - that's why we stayed at _03
>>> >> Thanks.
>>> >>
>>> >>>
>>> >>> Mike
>>> >>>
>>> >>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <
>>> peterlkeegan@gmail.com>
>>> >>> wrote:
>>> >>> > After rebuilding the corrupted indexes, the low disk space exception
>>> is
>>> >>> now
>>> >>> > occurring as expected. Sorry for the distraction.
>>> >>> >
>>> >>> > fyi, here are the details:
>>> >>> >
>>> >>> >  java.io.IOException: There is not enough space on the disk
>>> >>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
>>> >>> >    at java.io.RandomAccessFile.write(Unknown Source)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>>> >>> >    at
>>> org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>>> >>> >    at
>>> >>> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>>> >>> >    at
>>> >>> >
>>> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>>> >>> >    at
>>> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>>> >>> >    at
>>> >>> >
>>> >>>
>>> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>>> >>> >
>>> >>> >
>>> >>> > And the corresponding index info log:
>>> >>> >
>>> >>> > IFD [Indexer]: setInfoStream
>>> >>> >
>>> >>>
>>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
>>> >>> > IW 1 [Indexer]: setInfoStream:
>>> >>> > dir=org.apache.lucene.store.SimpleFSDirectory@D
>>> >>> :\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
>>> >>> > autoCommit=false
>>> >>> >
>>> >>>
>>> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76mergeScheduler
>>> >>>
>>> =org.apache.lucene.index.ConcurrentMergeScheduler@34883357ramBufferSizeMB
>>> >>> =16.0
>>> >>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>>> >>> > maxFieldLength=2147483647 index=
>>> >>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
>>> >>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
>>> >>> docStoreOffset=0
>>> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>>> >>> > numBufDelTerms=0
>>> >>> > IW 1 [Indexer]:   index before flush
>>> >>> > IW 1 [Indexer]: now start transaction
>>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>>> >>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
>>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>>> >>> > IW 1 [Indexer]: CMS: now merge
>>> >>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
>>> >>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
>>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total
>>> 1
>>> >>> > pending]
>>> >>> > IW 1 [Indexer]: now merge
>>> >>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>>> >>> >  index=_7:Cx1075533->_0** _8:Cx2795**
>>> >>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
>>> >>> > IW 1 [Indexer]: merge: total 1074388 docs
>>> >>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0
>>> [mergeDocStores]
>>> >>> > index=_7:Cx1075533->_0** _8:Cx2795**
>>> >>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
>>> >>> [mergeDocStores]
>>> >>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
>>> >>> false]
>>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>>> >>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
>>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1
>>> pending]
>>> >>> > IW 1 [Indexer]: now merge
>>> >>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
>>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>>> >>> >  index=_0:C1074388 _8:Cx2795**
>>> >>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
>>> >>> > IW 1 [Indexer]: merge: total 2795 docs
>>> >>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
>>> >>> > [mergeDocStores] exc=java.io.IOException: There is not enough space
>>> on
>>> >>> the
>>> >>> > disk
>>> >>> > IW 1 [Indexer]: hit exception during merge
>>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>>> unreferenced
>>> >>> file
>>> >>> > "_1.fdt"
>>> >>> > IFD [Indexer]: delete "_1.fdt"
>>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>>> unreferenced
>>> >>> file
>>> >>> > "_1.fdx"
>>> >>> > IFD [Indexer]: delete "_1.fdx"
>>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>>> unreferenced
>>> >>> file
>>> >>> > "_1.fnm"
>>> >>> > IFD [Indexer]: delete "_1.fnm"
>>> >>> > IW 1 [Indexer]: now rollback transaction
>>> >>> > IW 1 [Indexer]: all running merges have aborted
>>> >>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
>>> >>> false]
>>> >>> > IFD [Indexer]: delete "_0.nrm"
>>> >>> > IFD [Indexer]: delete "_0.tis"
>>> >>> > IFD [Indexer]: delete "_0.fnm"
>>> >>> > IFD [Indexer]: delete "_0.tii"
>>> >>> > IFD [Indexer]: delete "_0.frq"
>>> >>> > IFD [Indexer]: delete "_0.fdx"
>>> >>> > IFD [Indexer]: delete "_0.prx"
>>> >>> > IFD [Indexer]: delete "_0.fdt"
>>> >>> >
>>> >>> >
>>> >>> > Peter
>>> >>> >
>>> >>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <
>>> peterlkeegan@gmail.com
>>> >>> >wrote:
>>> >>> >
>>> >>> >>
>>> >>> >>
>>> >>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>>> >>> >> lucene@mikemccandless.com> wrote:
>>> >>> >>
>>> >>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <
>>> peterlkeegan@gmail.com
>>> >>> >
>>> >>> >>> wrote:
>>> >>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>>> >>> >>> > lucene@mikemccandless.com> wrote:
>>> >>> >>> >
>>> >>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <
>>> >>> peterlkeegan@gmail.com
>>> >>> >>> >
>>> >>> >>> >> wrote:
>>> >>> >>> >> > Even running in console mode, the exception is difficult to
>>> >>> >>> interpret.
>>> >>> >>> >> > Here's an exception that I think occurred during an add
>>> document,
>>> >>> >>> commit
>>> >>> >>> >> or
>>> >>> >>> >> > close:
>>> >>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
>>> >>> >>> segmentInfo
>>> >>> >>> >> > shows 5777
>>> >>> >>> >>
>>> >>> >>> >> That's spooky.  Do you have the full exception for this one?
>>>  What
>>> >>> IO
>>> >>> >>> >> system are you running on?  (Is it just a local drive on your
>>> >>> windows
>>> >>> >>> >> computer?) It's almost as if the IO system is not generating an
>>> >>> >>> >> IOException to Java when disk fills up.
>>> >>> >>> >>
>>> >>> >>> >
>>> >>> >>> > Index and code are all on a local drive. There is no other
>>> exception
>>> >>> >>> coming
>>> >>> >>> > back - just what I reported.
>>> >>> >>>
>>> >>> >>> But, you didn't report a traceback for this first one?
>>> >>> >>>
>>> >>> >>
>>> >>> >> Yes, I need to add some more printStackTrace calls.
>>> >>> >>
>>> >>> >>
>>> >>> >>>
>>> >>> >>> >> > I ensured that the disk space was low before updating the
>>> index.
>>> >>> >>> >>
>>> >>> >>> >> You mean, to intentionally test the disk-full case?
>>> >>> >>> >>
>>> >>> >>> >
>>> >>> >>> > Yes, that's right.
>>> >>> >>>
>>> >>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full
>>> /
>>> >>> >>> corruption to happen again, and post back the resulting output?
>>>  Make
>>> >>> >>> sure your index first passes CheckIndex before starting (so we
>>> don't
>>> >>> >>> begin the test w/ any pre-existing index corruption).
>>> >>> >>>
>>> >>> >>
>>> >>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
>>> >>> build
>>> >>> >> new indexes from scratch. This will take a while.
>>> >>> >>
>>> >>> >>
>>> >>> >>> >> > On another occasion, the exception was:
>>> >>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
>>> >>> _3:C107
>>> >>> >>> >> _4:C126
>>> >>> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
>>> >>> [mergeDocStores]
>>> >>> >>> >>
>>> >>> >>> >> In this case, the SegmentMerger was trying to open this
>>> segment,
>>> >>> but
>>> >>> >>> >> on attempting to read the first int from the fdx (fields index)
>>> >>> file
>>> >>> >>> >> for one of the segments, it hit EOF.
>>> >>> >>> >>
>>> >>> >>> >> This is also spooky -- this looks like index corruption, which
>>> >>> should
>>> >>> >>> >> never happen on hitting disk full.
>>> >>> >>> >>
>>> >>> >>> >
>>> >>> >>> > That's what I thought, too. Could Lucene be catching the
>>> IOException
>>> >>> and
>>> >>> >>> > turning it into a different exception?
>>> >>> >>>
>>> >>> >>> I think that's unlikely, but I guess possible.  We have "disk
>>> full"
>>> >>> >>> tests in the unit tests, that throw an IOException at different
>>> times.
>>> >>> >>>
>>> >>> >>> What exact windows version are you using?  The local drive is
>>> NTFS?
>>> >>> >>>
>>> >>> >>
>>> >>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>>> >>> >>
>>> >>> >>
>>> >>> >>>
>>> >>> >>> Mike
>>> >>> >>>
>>> >>> >>>
>>> >>> >>>
>>> >>> >>>
>>> >>> >>
>>> >>> >
>>> >>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Without the optimize, it looks like there are errors on all segments except
the first:

Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2

Segments file=segments_2 numSegments=3 version=FORMAT_DIAGNOSTICS [Lucene
2.9]
  1 of 3: name=_0 docCount=413557
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=1,148.795
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=0
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [7704599 terms; 180318249 terms/docs pairs;
340258711 tokens]
    test: stored fields.......OK [1240671 total field count; avg 3 fields
per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]

  2 of 3: name=_1 docCount=359203
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=1,125.103
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=413557
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid196$ docFreq=43 !=
num docs seen 4 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid196$ docFreq=43 != num docs
seen 4 + num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [1077609 total field count; avg 3 fields
per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  3 of 3: name=_2 docCount=304659
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=961.764
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9
exported - 2009-10-27 15:31:52, source=flush, os.arch=amd64,
java.version=1.6.0_16, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=772760
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
seen 245 + num docs deleted 0]
java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 245 +
num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [913977 total field count; avg 3 fields per
doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

WARNING: 2 broken segments (containing 663862 documents) detected
WARNING: would write new segments file, and 663862 documents would be lost,
if -fix were specified


Do the unit tests create multi-segment indexes?

Peter

On Tue, Oct 27, 2009 at 3:08 PM, Peter Keegan <pe...@gmail.com>wrote:

> It's reproducible with a large no. of docs (>1 million), but not with 100K
> docs.
> I got same error with jvm 1.6.0_16.
> The index was optimized after all docs are added. I'll try removing the
> optimize.
>
> Peter
>
>
> On Tue, Oct 27, 2009 at 2:57 PM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> This is odd -- is it reproducible?
>>
>> Can you narrow it down to a small set of docs that when indexed
>> produce a corrupted index?
>>
>> If you attempt to optimize the index, does it fail?
>>
>> Mike
>>
>> On Tue, Oct 27, 2009 at 1:40 PM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > It seems the index is corrupted immediately after the initial build
>> (ample
>> > disk space was provided):
>> >
>> > Output from CheckIndex:
>> >
>> > Opening index @
>> D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
>> >
>> > Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS
>> [Lucene
>> > 2.9]
>> >  1 of 1: name=_7 docCount=1077025
>> >    compound=false
>> >    hasProx=true
>> >    numFiles=8
>> >    size (MB)=3,201.196
>> >    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2,
>> os=Windows
>> > 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
>> > 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
>> > java.vendor=Sun Microsystems Inc.}
>> >    docStoreOffset=0
>> >    docStoreSegment=_0
>> >    docStoreIsCompoundFile=false
>> >    no deletions
>> >    test: open reader.........OK
>> >    test: fields..............OK [33 fields]
>> >    test: field norms.........OK [33 fields]
>> >    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num
>> docs
>> > seen 482 + num docs deleted 0]
>> > java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen
>> 482 +
>> > num docs deleted 0
>> >    at
>> org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>> >    test: stored fields.......OK [3231075 total field count; avg 3 fields
>> > per doc]
>> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
>> > vector fields per doc]
>> > FAILED
>> >    WARNING: fixIndex() would remove reference to this segment; full
>> > exception:
>> > java.lang.RuntimeException: Term Index test failed
>> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>> >
>> > WARNING: 1 broken segments (containing 1077025 documents) detected
>> > WARNING: would write new segments file, and 1077025 documents would be
>> lost,
>> > if -fix were specified
>> >
>> > Searching on this index seems to be fine, though.
>> >
>> > Here is the IndexWriter log from the build:
>> >
>> > IFD [Indexer]: setInfoStream
>> >
>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
>> > IW 0 [Indexer]: setInfoStream:
>> > dir=org.apache.lucene.store.SimpleFSDirectory@D
>> :\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
>> > autoCommit=false
>> >
>> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2mergeScheduler
>> =org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2ramBufferSizeMB
>> =16.0
>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>> > maxFieldLength=2147483647 index=
>> > IW 0 [Indexer]: setRAMBufferSizeMB 910.25
>> > IW 0 [Indexer]: setMaxBufferedDocs 1000000
>> > IW 0 [Indexer]: flush at getReader
>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=null
>> docStoreOffset=0
>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>> > numBufDelTerms=0
>> > IW 0 [Indexer]:   index before flush
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463
>> allocMB=886.463
>> > deletesMB=23.803 triggerMB=910.25
>> > IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
>> > docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
>> > numDocs=171638 numBufDelTerms=171638
>> > IW 0 [UpdWriterBuild]:   index before flush
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=171638
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712
>> newFlushedSize=573198529
>> > docs/MB=313.985 new/old=61.666%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977
>> allocMB=901.32
>> > deletesMB=52.274 triggerMB=910.25
>> > IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
>> > docStoreOffset=171638 flushDocs=true flushDeletes=false
>> flushDocStores=false
>> > numDocs=204995 numBufDelTerms=204995
>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=204995
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632
>> newFlushedSize=544283851
>> > docs/MB=394.928 new/old=60.499%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
>> usedMB=834.645 vs
>> > trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
>> > byteBlockFree=35.938 charBlockFree=8.938
>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613 usedMB=910.272
>> > allocMB=834.707
>> > IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
>> > docStoreOffset=376633 flushDocs=true flushDeletes=false
>> flushDocStores=false
>> > numDocs=168236 numBufDelTerms=168236
>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>> _1:C204995->_0
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=168236
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224
>> newFlushedSize=530720464
>> > docs/MB=332.394 new/old=60.641%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282
>> allocMB=835.832
>> > deletesMB=95.997 triggerMB=910.25
>> > IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
>> > docStoreOffset=544869 flushDocs=true flushDeletes=false
>> flushDocStores=false
>> > numDocs=146894 numBufDelTerms=146894
>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>> _1:C204995->_0
>> > _2:C168236->_0
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=146894
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800
>> newFlushedSize=522388771
>> > docs/MB=294.856 new/old=61.181%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724
>> allocMB=835.832
>> > deletesMB=118.535 triggerMB=910.25
>> > IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
>> > docStoreOffset=691763 flushDocs=true flushDeletes=false
>> flushDocStores=false
>> > numDocs=162034 numBufDelTerms=162034
>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>> _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=162034
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400
>> newFlushedSize=498741034
>> > docs/MB=340.668 new/old=60.076%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations:
>> usedMB=771.396 vs
>> > trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
>> > byteBlockFree=39.688 charBlockFree=7.188
>> > IW 0 [UpdWriterBuild]: DW:     nothing to free
>> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374 usedMB=910.271
>> > allocMB=771.458
>> > IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
>> > docStoreOffset=853797 flushDocs=true flushDeletes=false
>> flushDocStores=false
>> > numDocs=146250 numBufDelTerms=146250
>> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
>> _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=146250
>> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816
>> newFlushedSize=485212402
>> > docs/MB=316.056 new/old=59.987%
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
>> =
>> > false]
>> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
>> =
>> > false]
>> > IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
>> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
>> > IW 0 [UpdWriterBuild]: CMS: now merge
>> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
>> > IW 0 [Indexer]: commit: start
>> > IW 0 [Indexer]: commit: now prepare
>> > IW 0 [Indexer]: prepareCommit: flush
>> > IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
>> > docStoreOffset=1000047 flushDocs=true flushDeletes=true
>> flushDocStores=true
>> > numDocs=76978 numBufDelTerms=76978
>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> > IW 0 [Indexer]:   flush shared docStore segment _0
>> > IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
>> > numDocs=1077025
>> > IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
>> > IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
>> > docs/MB=295.486 new/old=56.096%
>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
>> false]
>> > IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0 deleted
>> > docIDs and 0 deleted queries on 7 segments.
>> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
>> false]
>> > IW 0 [Indexer]: LMP: findMerges: 7 segments
>> > IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
>> > IW 0 [Indexer]: CMS: now merge
>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> _2:C168236->_0
>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
>> > IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > changeCount=21
>> > IW 0 [Indexer]: now sync _0.tis
>> > IW 0 [Indexer]: now sync _5.prx
>> > IW 0 [Indexer]: now sync _3.frq
>> > IW 0 [Indexer]: now sync _3.tii
>> > IW 0 [Indexer]: now sync _1.frq
>> > IW 0 [Indexer]: now sync _6.frq
>> > IW 0 [Indexer]: now sync _4.prx
>> > IW 0 [Indexer]: now sync _4.fnm
>> > IW 0 [Indexer]: now sync _2.tii
>> > IW 0 [Indexer]: now sync _3.fnm
>> > IW 0 [Indexer]: now sync _1.fnm
>> > IW 0 [Indexer]: now sync _6.tis
>> > IW 0 [Indexer]: now sync _4.frq
>> > IW 0 [Indexer]: now sync _5.nrm
>> > IW 0 [Indexer]: now sync _5.tis
>> > IW 0 [Indexer]: now sync _1.tii
>> > IW 0 [Indexer]: now sync _4.tis
>> > IW 0 [Indexer]: now sync _0.prx
>> > IW 0 [Indexer]: now sync _3.nrm
>> > IW 0 [Indexer]: now sync _4.tii
>> > IW 0 [Indexer]: now sync _0.nrm
>> > IW 0 [Indexer]: now sync _5.fnm
>> > IW 0 [Indexer]: now sync _1.tis
>> > IW 0 [Indexer]: now sync _0.fnm
>> > IW 0 [Indexer]: now sync _2.prx
>> > IW 0 [Indexer]: now sync _6.tii
>> > IW 0 [Indexer]: now sync _4.nrm
>> > IW 0 [Indexer]: now sync _2.frq
>> > IW 0 [Indexer]: now sync _5.frq
>> > IW 0 [Indexer]: now sync _3.prx
>> > IW 0 [Indexer]: now sync _5.tii
>> > IW 0 [Indexer]: now sync _2.fnm
>> > IW 0 [Indexer]: now sync _1.prx
>> > IW 0 [Indexer]: now sync _2.tis
>> > IW 0 [Indexer]: now sync _0.tii
>> > IW 0 [Indexer]: now sync _6.prx
>> > IW 0 [Indexer]: now sync _0.frq
>> > IW 0 [Indexer]: now sync _6.fnm
>> > IW 0 [Indexer]: now sync _0.fdx
>> > IW 0 [Indexer]: now sync _6.nrm
>> > IW 0 [Indexer]: now sync _0.fdt
>> > IW 0 [Indexer]: now sync _1.nrm
>> > IW 0 [Indexer]: now sync _2.nrm
>> > IW 0 [Indexer]: now sync _3.tis
>> > IW 0 [Indexer]: done all syncs
>> > IW 0 [Indexer]: commit: pendingCommit != null
>> > IW 0 [Indexer]: commit: wrote segments file "segments_2"
>> > IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit =
>> true]
>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
>> > IFD [Indexer]: delete "segments_1"
>> > IW 0 [Indexer]: commit: done
>> > IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
>> docStoreOffset=0
>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>> > numBufDelTerms=0
>> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0
>> _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > [optimize] [total 1 pending]
>> > IW 0 [Indexer]: CMS: now merge
>> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
>> _2:C168236->_0
>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>> > IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > into _7 [optimize]
>> > IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
>> > IW 0 [Lucene Merge Thread #0]: now merge
>> >  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
>> >  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
>> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>> > IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > into _7 [optimize]
>> > IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
>> > IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0
>> _1:C204995->_0
>> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
>> _6:C76978->_0
>> > into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
>> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
>> > IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
>> > _1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
>> _5:C146250->_0
>> > _6:C76978->_0 into _7 [optimize]
>> > IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments ;
>> > isCommit = false]
>> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
>> > IW 0 [Indexer]: now flush at close
>> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6
>> docStoreOffset=0
>> > flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
>> > numBufDelTerms=0
>> > IW 0 [Indexer]:   index before flush _7:C1077025->_0
>> > IW 0 [Indexer]:   flush shared docStore segment _6
>> > IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6
>> numDocs=0
>> > IW 0 [Indexer]: CMS: now merge
>> > IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
>> > IW 0 [Indexer]: CMS:   no more merges pending; now return
>> > IW 0 [Indexer]: now call final commit()
>> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
>> > IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
>> > IW 0 [Indexer]: now sync _7.prx
>> > IW 0 [Indexer]: now sync _7.fnm
>> > IW 0 [Indexer]: now sync _7.tis
>> > IW 0 [Indexer]: now sync _7.nrm
>> > IW 0 [Indexer]: now sync _7.tii
>> > IW 0 [Indexer]: now sync _7.frq
>> > IW 0 [Indexer]: done all syncs
>> > IW 0 [Indexer]: commit: pendingCommit != null
>> > IW 0 [Indexer]: commit: wrote segments file "segments_3"
>> > IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit =
>> true]
>> > IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
>> > IFD [Indexer]: delete "_0.tis"
>> > IFD [Indexer]: delete "_5.prx"
>> > IFD [Indexer]: delete "_3.tii"
>> > IFD [Indexer]: delete "_3.frq"
>> > IFD [Indexer]: delete "_1.frq"
>> > IFD [Indexer]: delete "_6.frq"
>> > IFD [Indexer]: delete "_4.prx"
>> > IFD [Indexer]: delete "_4.fnm"
>> > IFD [Indexer]: delete "_2.tii"
>> > IFD [Indexer]: delete "_3.fnm"
>> > IFD [Indexer]: delete "_1.fnm"
>> > IFD [Indexer]: delete "_6.tis"
>> > IFD [Indexer]: delete "_4.frq"
>> > IFD [Indexer]: delete "_5.nrm"
>> > IFD [Indexer]: delete "_5.tis"
>> > IFD [Indexer]: delete "_1.tii"
>> > IFD [Indexer]: delete "_4.tis"
>> > IFD [Indexer]: delete "_0.prx"
>> > IFD [Indexer]: delete "_3.nrm"
>> > IFD [Indexer]: delete "_4.tii"
>> > IFD [Indexer]: delete "_0.nrm"
>> > IFD [Indexer]: delete "_5.fnm"
>> > IFD [Indexer]: delete "_1.tis"
>> > IFD [Indexer]: delete "_0.fnm"
>> > IFD [Indexer]: delete "_2.prx"
>> > IFD [Indexer]: delete "_6.tii"
>> > IFD [Indexer]: delete "_4.nrm"
>> > IFD [Indexer]: delete "_2.frq"
>> > IFD [Indexer]: delete "_5.frq"
>> > IFD [Indexer]: delete "_3.prx"
>> > IFD [Indexer]: delete "_5.tii"
>> > IFD [Indexer]: delete "_2.fnm"
>> > IFD [Indexer]: delete "_1.prx"
>> > IFD [Indexer]: delete "_2.tis"
>> > IFD [Indexer]: delete "_0.tii"
>> > IFD [Indexer]: delete "_6.prx"
>> > IFD [Indexer]: delete "_0.frq"
>> > IFD [Indexer]: delete "segments_2"
>> > IFD [Indexer]: delete "_6.fnm"
>> > IFD [Indexer]: delete "_6.nrm"
>> > IFD [Indexer]: delete "_1.nrm"
>> > IFD [Indexer]: delete "_2.nrm"
>> > IFD [Indexer]: delete "_3.tis"
>> > IW 0 [Indexer]: commit: done
>> > IW 0 [Indexer]: at close: _7:C1077025->_0
>> >
>> > I see no errors.
>> > Peter
>> >
>> >
>> > On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com
>> >wrote:
>> >
>> >>
>> >>
>> >> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
>> >> lucene@mikemccandless.com> wrote:
>> >>
>> >>> OK that exception looks more reasonable, for a disk full event.
>> >>>
>> >>> But, I can't tell from your follow-on emails: did this lead to index
>> >>> corruption?
>> >>>
>> >>
>> >> Yes, but this may be caused by the application ignoring a Lucene
>> exception
>> >> somewhere else. I will chase this down.
>> >>
>> >>>
>> >>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
>> >>> really should upgrade that to the latest 1.6.0 -- there's at least one
>> >>> known problem with Lucene and early 1.6.0 JREs.
>> >>>
>> >>
>> >> Yes, I remember this problem - that's why we stayed at _03
>> >> Thanks.
>> >>
>> >>>
>> >>> Mike
>> >>>
>> >>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <
>> peterlkeegan@gmail.com>
>> >>> wrote:
>> >>> > After rebuilding the corrupted indexes, the low disk space exception is
>> >>> > now occurring as expected. Sorry for the distraction.
>> >>> >
>> >>> > fyi, here are the details:
>> >>> >
>> >>> >  java.io.IOException: There is not enough space on the disk
>> >>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
>> >>> >    at java.io.RandomAccessFile.write(Unknown Source)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>> >>> >    at
>> org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>> >>> >    at
>> >>> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>> >>> >    at
>> >>> >
>> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>> >>> >    at
>> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>> >>> >    at
>> >>> >
>> >>>
>> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>> >>> >
>> >>> >
>> >>> > And the corresponding index info log:
>> >>> >
>> >>> > IFD [Indexer]: setInfoStream
>> >>> >
>> >>>
>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
>> >>> > IW 1 [Indexer]: setInfoStream:
>> >>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
>> >>> > autoCommit=false
>> >>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76
>> >>> > mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357
>> >>> > ramBufferSizeMB=16.0
>> >>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>> >>> > maxFieldLength=2147483647 index=
>> >>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
>> >>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
>> >>> docStoreOffset=0
>> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>> >>> > numBufDelTerms=0
>> >>> > IW 1 [Indexer]:   index before flush
>> >>> > IW 1 [Indexer]: now start transaction
>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>> >>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> >>> > IW 1 [Indexer]: CMS: now merge
>> >>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
>> >>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total
>> 1
>> >>> > pending]
>> >>> > IW 1 [Indexer]: now merge
>> >>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>> >>> >  index=_7:Cx1075533->_0** _8:Cx2795**
>> >>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
>> >>> > IW 1 [Indexer]: merge: total 1074388 docs
>> >>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0
>> [mergeDocStores]
>> >>> > index=_7:Cx1075533->_0** _8:Cx2795**
>> >>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
>> >>> [mergeDocStores]
>> >>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
>> >>> false]
>> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>> >>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
>> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> >>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1
>> pending]
>> >>> > IW 1 [Indexer]: now merge
>> >>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
>> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>> >>> >  index=_0:C1074388 _8:Cx2795**
>> >>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
>> >>> > IW 1 [Indexer]: merge: total 2795 docs
>> >>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
>> >>> > [mergeDocStores] exc=java.io.IOException: There is not enough space
>> on
>> >>> the
>> >>> > disk
>> >>> > IW 1 [Indexer]: hit exception during merge
>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>> unreferenced
>> >>> file
>> >>> > "_1.fdt"
>> >>> > IFD [Indexer]: delete "_1.fdt"
>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>> unreferenced
>> >>> file
>> >>> > "_1.fdx"
>> >>> > IFD [Indexer]: delete "_1.fdx"
>> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
>> unreferenced
>> >>> file
>> >>> > "_1.fnm"
>> >>> > IFD [Indexer]: delete "_1.fnm"
>> >>> > IW 1 [Indexer]: now rollback transaction
>> >>> > IW 1 [Indexer]: all running merges have aborted
>> >>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
>> >>> false]
>> >>> > IFD [Indexer]: delete "_0.nrm"
>> >>> > IFD [Indexer]: delete "_0.tis"
>> >>> > IFD [Indexer]: delete "_0.fnm"
>> >>> > IFD [Indexer]: delete "_0.tii"
>> >>> > IFD [Indexer]: delete "_0.frq"
>> >>> > IFD [Indexer]: delete "_0.fdx"
>> >>> > IFD [Indexer]: delete "_0.prx"
>> >>> > IFD [Indexer]: delete "_0.fdt"
>> >>> >
>> >>> >
>> >>> > Peter
>> >>> >
>> >>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <
>> peterlkeegan@gmail.com
>> >>> >wrote:
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>> >>> >> lucene@mikemccandless.com> wrote:
>> >>> >>
>> >>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <
>> peterlkeegan@gmail.com
>> >>> >
>> >>> >>> wrote:
>> >>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>> >>> >>> > lucene@mikemccandless.com> wrote:
>> >>> >>> >
>> >>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <
>> >>> peterlkeegan@gmail.com
>> >>> >>> >
>> >>> >>> >> wrote:
>> >>> >>> >> > Even running in console mode, the exception is difficult to
>> >>> >>> interpret.
>> >>> >>> >> > Here's an exception that I think occurred during an add
>> document,
>> >>> >>> commit
>> >>> >>> >> or
>> >>> >>> >> > close:
>> >>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
>> >>> >>> segmentInfo
>> >>> >>> >> > shows 5777
>> >>> >>> >>
>> >>> >>> >> That's spooky.  Do you have the full exception for this one?
>>  What
>> >>> IO
>> >>> >>> >> system are you running on?  (Is it just a local drive on your
>> >>> windows
>> >>> >>> >> computer?) It's almost as if the IO system is not generating an
>> >>> >>> >> IOException to Java when disk fills up.
>> >>> >>> >>
>> >>> >>> >
>> >>> >>> > Index and code are all on a local drive. There is no other
>> exception
>> >>> >>> coming
>> >>> >>> > back - just what I reported.
>> >>> >>>
>> >>> >>> But, you didn't report a traceback for this first one?
>> >>> >>>
>> >>> >>
>> >>> >> Yes, I need to add some more printStackTrace calls.
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>> >> > I ensured that the disk space was low before updating the
>> index.
>> >>> >>> >>
>> >>> >>> >> You mean, to intentionally test the disk-full case?
>> >>> >>> >>
>> >>> >>> >
>> >>> >>> > Yes, that's right.
>> >>> >>>
>> >>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full
>> /
>> >>> >>> corruption to happen again, and post back the resulting output?
>>  Make
>> >>> >>> sure your index first passes CheckIndex before starting (so we
>> don't
>> >>> >>> begin the test w/ any pre-existing index corruption).
>> >>> >>>
>> >>> >>
>> >>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
>> >>> build
>> >>> >> new indexes from scratch. This will take a while.
>> >>> >>
>> >>> >>
>> >>> >>> >> > On another occasion, the exception was:
>> >>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
>> >>> _3:C107
>> >>> >>> >> _4:C126
>> >>> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
>> >>> [mergeDocStores]
>> >>> >>> >>
>> >>> >>> >> In this case, the SegmentMerger was trying to open this
>> segment,
>> >>> but
>> >>> >>> >> on attempting to read the first int from the fdx (fields index)
>> >>> file
>> >>> >>> >> for one of the segments, it hit EOF.
>> >>> >>> >>
>> >>> >>> >> This is also spooky -- this looks like index corruption, which
>> >>> should
>> >>> >>> >> never happen on hitting disk full.
>> >>> >>> >>
>> >>> >>> >
>> >>> >>> > That's what I thought, too. Could Lucene be catching the
>> IOException
>> >>> and
>> >>> >>> > turning it into a different exception?
>> >>> >>>
>> >>> >>> I think that's unlikely, but I guess possible.  We have "disk
>> full"
>> >>> >>> tests in the unit tests, that throw an IOException at different
>> times.
>> >>> >>>
>> >>> >>> What exact windows version are you using?  The local drive is
>> NTFS?
>> >>> >>>
>> >>> >>
>> >>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>> Mike
>> >>> >>>
>> >>> >>>
>> >>> >>>
>> >>> >>>
>> >>> >>
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
It's reproducible with a large number of docs (>1 million), but not with
100K docs.
I got the same error with JVM 1.6.0_16.
The index was optimized after all the docs were added. I'll try removing
the optimize.

Peter
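
A minimal sketch of this kind of on/off-optimize repro, assuming the Lucene
2.9 API; the index path, analyzer choice, and makeDoc() body are placeholders
rather than the poster's actual code:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.PrintStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class OptimizeRepro {
      public static void main(String[] args) throws Exception {
        boolean doOptimize = Boolean.parseBoolean(args[1]);
        IndexWriter writer = new IndexWriter(
            FSDirectory.open(new File(args[0])),
            new StandardAnalyzer(Version.LUCENE_29),
            true, IndexWriter.MaxFieldLength.UNLIMITED);
        // Log every flush/merge decision so a failure can be matched
        // against the writer's activity afterwards.
        writer.setInfoStream(new PrintStream(new FileOutputStream("iw.log")));
        for (int i = 0; i < 1000000; i++) {
          writer.addDocument(makeDoc(i));
        }
        writer.commit();
        if (doOptimize) {
          writer.optimize();  // merge everything down to a single segment
        }
        writer.close();
      }

      // Placeholder: build whatever document mix triggers the problem.
      private static Document makeDoc(int i) {
        Document doc = new Document();
        return doc;
      }
    }

Running it once with the flag true and once with it false, then checking each
resulting index with CheckIndex, would show whether the optimize step itself
introduces the corruption.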

On Tue, Oct 27, 2009 at 2:57 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> This is odd -- is it reproducible?
>
> Can you narrow it down to a small set of docs that when indexed
> produce a corrupted index?
>
> If you attempt to optimize the index, does it fail?
>
> Mike
>
> On Tue, Oct 27, 2009 at 1:40 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> > It seems the index is corrupted immediately after the initial build
> (ample
> > disk space was provided):
> >
> > Output from CheckIndex:
> >
> > Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
> >
> > Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene
> > 2.9]
> >  1 of 1: name=_7 docCount=1077025
> >    compound=false
> >    hasProx=true
> >    numFiles=8
> >    size (MB)=3,201.196
> >    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2,
> os=Windows
> > 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
> > 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
> > java.vendor=Sun Microsystems Inc.}
> >    docStoreOffset=0
> >    docStoreSegment=_0
> >    docStoreIsCompoundFile=false
> >    no deletions
> >    test: open reader.........OK
> >    test: fields..............OK [33 fields]
> >    test: field norms.........OK [33 fields]
> >    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
> > seen 482 + num docs deleted 0]
> > java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen
> 482 +
> > num docs deleted 0
> >    at
> org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >    test: stored fields.......OK [3231075 total field count; avg 3 fields
> > per doc]
> >    test: term vectors........OK [0 total vector count; avg 0 term/freq
> > vector fields per doc]
> > FAILED
> >    WARNING: fixIndex() would remove reference to this segment; full
> > exception:
> > java.lang.RuntimeException: Term Index test failed
> >    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
> >    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
> >
> > WARNING: 1 broken segments (containing 1077025 documents) detected
> > WARNING: would write new segments file, and 1077025 documents would be
> lost,
> > if -fix were specified
> >
> > Searching on this index seems to be fine, though.
> >
> > Here is the IndexWriter log from the build:
> >
> > IFD [Indexer]: setInfoStream
> >
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
> > IW 0 [Indexer]: setInfoStream:
> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
> > autoCommit=false
> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2
> > mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2
> > ramBufferSizeMB=16.0
> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> > maxFieldLength=2147483647 index=
> > IW 0 [Indexer]: setRAMBufferSizeMB 910.25
> > IW 0 [Indexer]: setMaxBufferedDocs 1000000
> > IW 0 [Indexer]: flush at getReader
> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=null
> docStoreOffset=0
> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> > numBufDelTerms=0
> > IW 0 [Indexer]:   index before flush
> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463
> allocMB=886.463
> > deletesMB=23.803 triggerMB=910.25
> > IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
> > docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
> > numDocs=171638 numBufDelTerms=171638
> > IW 0 [UpdWriterBuild]:   index before flush
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=171638
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712
> newFlushedSize=573198529
> > docs/MB=313.985 new/old=61.666%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977
> allocMB=901.32
> > deletesMB=52.274 triggerMB=910.25
> > IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
> > docStoreOffset=171638 flushDocs=true flushDeletes=false
> flushDocStores=false
> > numDocs=204995 numBufDelTerms=204995
> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=204995
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632
> newFlushedSize=544283851
> > docs/MB=394.928 new/old=60.499%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=834.645
> vs
> > trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
> > byteBlockFree=35.938 charBlockFree=8.938
> > IW 0 [UpdWriterBuild]: DW:     nothing to free
> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613 usedMB=910.272
> > allocMB=834.707
> > IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
> > docStoreOffset=376633 flushDocs=true flushDeletes=false
> flushDocStores=false
> > numDocs=168236 numBufDelTerms=168236
> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=168236
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224
> newFlushedSize=530720464
> > docs/MB=332.394 new/old=60.641%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282
> allocMB=835.832
> > deletesMB=95.997 triggerMB=910.25
> > IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
> > docStoreOffset=544869 flushDocs=true flushDeletes=false
> flushDocStores=false
> > numDocs=146894 numBufDelTerms=146894
> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=146894
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800
> newFlushedSize=522388771
> > docs/MB=294.856 new/old=61.181%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724
> allocMB=835.832
> > deletesMB=118.535 triggerMB=910.25
> > IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
> > docStoreOffset=691763 flushDocs=true flushDeletes=false
> flushDocStores=false
> > numDocs=162034 numBufDelTerms=162034
> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=162034
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400
> newFlushedSize=498741034
> > docs/MB=340.668 new/old=60.076%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=771.396
> vs
> > trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
> > byteBlockFree=39.688 charBlockFree=7.188
> > IW 0 [UpdWriterBuild]: DW:     nothing to free
> > IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374 usedMB=910.271
> > allocMB=771.458
> > IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
> > docStoreOffset=853797 flushDocs=true flushDeletes=false
> flushDocStores=false
> > numDocs=146250 numBufDelTerms=146250
> > IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> > IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=146250
> > IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816
> newFlushedSize=485212402
> > docs/MB=316.056 new/old=59.987%
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
> =
> > false]
> > IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit
> =
> > false]
> > IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
> > IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
> > IW 0 [UpdWriterBuild]: CMS: now merge
> > IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> > IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> > IW 0 [Indexer]: commit: start
> > IW 0 [Indexer]: commit: now prepare
> > IW 0 [Indexer]: prepareCommit: flush
> > IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
> > docStoreOffset=1000047 flushDocs=true flushDeletes=true
> flushDocStores=true
> > numDocs=76978 numBufDelTerms=76978
> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> > IW 0 [Indexer]:   flush shared docStore segment _0
> > IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
> > numDocs=1077025
> > IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
> > IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
> > docs/MB=295.486 new/old=56.096%
> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
> false]
> > IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0 deleted
> > docIDs and 0 deleted queries on 7 segments.
> > IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit =
> false]
> > IW 0 [Indexer]: LMP: findMerges: 7 segments
> > IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
> > IW 0 [Indexer]: CMS: now merge
> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0
> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> > IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > changeCount=21
> > IW 0 [Indexer]: now sync _0.tis
> > IW 0 [Indexer]: now sync _5.prx
> > IW 0 [Indexer]: now sync _3.frq
> > IW 0 [Indexer]: now sync _3.tii
> > IW 0 [Indexer]: now sync _1.frq
> > IW 0 [Indexer]: now sync _6.frq
> > IW 0 [Indexer]: now sync _4.prx
> > IW 0 [Indexer]: now sync _4.fnm
> > IW 0 [Indexer]: now sync _2.tii
> > IW 0 [Indexer]: now sync _3.fnm
> > IW 0 [Indexer]: now sync _1.fnm
> > IW 0 [Indexer]: now sync _6.tis
> > IW 0 [Indexer]: now sync _4.frq
> > IW 0 [Indexer]: now sync _5.nrm
> > IW 0 [Indexer]: now sync _5.tis
> > IW 0 [Indexer]: now sync _1.tii
> > IW 0 [Indexer]: now sync _4.tis
> > IW 0 [Indexer]: now sync _0.prx
> > IW 0 [Indexer]: now sync _3.nrm
> > IW 0 [Indexer]: now sync _4.tii
> > IW 0 [Indexer]: now sync _0.nrm
> > IW 0 [Indexer]: now sync _5.fnm
> > IW 0 [Indexer]: now sync _1.tis
> > IW 0 [Indexer]: now sync _0.fnm
> > IW 0 [Indexer]: now sync _2.prx
> > IW 0 [Indexer]: now sync _6.tii
> > IW 0 [Indexer]: now sync _4.nrm
> > IW 0 [Indexer]: now sync _2.frq
> > IW 0 [Indexer]: now sync _5.frq
> > IW 0 [Indexer]: now sync _3.prx
> > IW 0 [Indexer]: now sync _5.tii
> > IW 0 [Indexer]: now sync _2.fnm
> > IW 0 [Indexer]: now sync _1.prx
> > IW 0 [Indexer]: now sync _2.tis
> > IW 0 [Indexer]: now sync _0.tii
> > IW 0 [Indexer]: now sync _6.prx
> > IW 0 [Indexer]: now sync _0.frq
> > IW 0 [Indexer]: now sync _6.fnm
> > IW 0 [Indexer]: now sync _0.fdx
> > IW 0 [Indexer]: now sync _6.nrm
> > IW 0 [Indexer]: now sync _0.fdt
> > IW 0 [Indexer]: now sync _1.nrm
> > IW 0 [Indexer]: now sync _2.nrm
> > IW 0 [Indexer]: now sync _3.tis
> > IW 0 [Indexer]: done all syncs
> > IW 0 [Indexer]: commit: pendingCommit != null
> > IW 0 [Indexer]: commit: wrote segments file "segments_2"
> > IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit = true]
> > IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
> > IFD [Indexer]: delete "segments_1"
> > IW 0 [Indexer]: commit: done
> > IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> > numBufDelTerms=0
> > IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > [optimize] [total 1 pending]
> > IW 0 [Indexer]: CMS: now merge
> > IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0
> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > into _7 [optimize]
> > IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
> > IW 0 [Lucene Merge Thread #0]: now merge
> >  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
> >  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> > _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > into _7 [optimize]
> > IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
> > IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0 _1:C204995->_0
> > _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
> > _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> > IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
> > _1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> _5:C146250->_0
> > _6:C76978->_0 into _7 [optimize]
> > IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments ;
> > isCommit = false]
> > IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
> > IW 0 [Indexer]: now flush at close
> > IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
> > flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
> > numBufDelTerms=0
> > IW 0 [Indexer]:   index before flush _7:C1077025->_0
> > IW 0 [Indexer]:   flush shared docStore segment _6
> > IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6
> numDocs=0
> > IW 0 [Indexer]: CMS: now merge
> > IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
> > IW 0 [Indexer]: CMS:   no more merges pending; now return
> > IW 0 [Indexer]: now call final commit()
> > IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> > IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
> > IW 0 [Indexer]: now sync _7.prx
> > IW 0 [Indexer]: now sync _7.fnm
> > IW 0 [Indexer]: now sync _7.tis
> > IW 0 [Indexer]: now sync _7.nrm
> > IW 0 [Indexer]: now sync _7.tii
> > IW 0 [Indexer]: now sync _7.frq
> > IW 0 [Indexer]: done all syncs
> > IW 0 [Indexer]: commit: pendingCommit != null
> > IW 0 [Indexer]: commit: wrote segments file "segments_3"
> > IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit = true]
> > IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
> > IFD [Indexer]: delete "_0.tis"
> > IFD [Indexer]: delete "_5.prx"
> > IFD [Indexer]: delete "_3.tii"
> > IFD [Indexer]: delete "_3.frq"
> > IFD [Indexer]: delete "_1.frq"
> > IFD [Indexer]: delete "_6.frq"
> > IFD [Indexer]: delete "_4.prx"
> > IFD [Indexer]: delete "_4.fnm"
> > IFD [Indexer]: delete "_2.tii"
> > IFD [Indexer]: delete "_3.fnm"
> > IFD [Indexer]: delete "_1.fnm"
> > IFD [Indexer]: delete "_6.tis"
> > IFD [Indexer]: delete "_4.frq"
> > IFD [Indexer]: delete "_5.nrm"
> > IFD [Indexer]: delete "_5.tis"
> > IFD [Indexer]: delete "_1.tii"
> > IFD [Indexer]: delete "_4.tis"
> > IFD [Indexer]: delete "_0.prx"
> > IFD [Indexer]: delete "_3.nrm"
> > IFD [Indexer]: delete "_4.tii"
> > IFD [Indexer]: delete "_0.nrm"
> > IFD [Indexer]: delete "_5.fnm"
> > IFD [Indexer]: delete "_1.tis"
> > IFD [Indexer]: delete "_0.fnm"
> > IFD [Indexer]: delete "_2.prx"
> > IFD [Indexer]: delete "_6.tii"
> > IFD [Indexer]: delete "_4.nrm"
> > IFD [Indexer]: delete "_2.frq"
> > IFD [Indexer]: delete "_5.frq"
> > IFD [Indexer]: delete "_3.prx"
> > IFD [Indexer]: delete "_5.tii"
> > IFD [Indexer]: delete "_2.fnm"
> > IFD [Indexer]: delete "_1.prx"
> > IFD [Indexer]: delete "_2.tis"
> > IFD [Indexer]: delete "_0.tii"
> > IFD [Indexer]: delete "_6.prx"
> > IFD [Indexer]: delete "_0.frq"
> > IFD [Indexer]: delete "segments_2"
> > IFD [Indexer]: delete "_6.fnm"
> > IFD [Indexer]: delete "_6.nrm"
> > IFD [Indexer]: delete "_1.nrm"
> > IFD [Indexer]: delete "_2.nrm"
> > IFD [Indexer]: delete "_3.tis"
> > IW 0 [Indexer]: commit: done
> > IW 0 [Indexer]: at close: _7:C1077025->_0
> >
> > I see no errors.
> > Peter
> >
> >
> > On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com
> >wrote:
> >
> >>
> >>
> >> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
> >> lucene@mikemccandless.com> wrote:
> >>
> >>> OK that exception looks more reasonable, for a disk full event.
> >>>
> >>> But, I can't tell from your follow-on emails: did this lead to index
> >>> corruption?
> >>>
> >>
> >> Yes, but this may be caused by the application ignoring a Lucene
> exception
> >> somewhere else. I will chase this down.
> >>
> >>>
> >>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
> >>> really should upgrade that to the latest 1.6.0 -- there's at least one
> >>> known problem with Lucene and early 1.6.0 JREs.
> >>>
> >>
> >> Yes, I remember this problem - that's why we stayed at _03
> >> Thanks.
> >>
> >>>
> >>> Mike
> >>>
> >>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <peterlkeegan@gmail.com
> >
> >>> wrote:
> >>> > After rebuilding the corrupted indexes, the low disk space exception is
> >>> > now occurring as expected. Sorry for the distraction.
> >>> >
> >>> > fyi, here are the details:
> >>> >
> >>> >  java.io.IOException: There is not enough space on the disk
> >>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
> >>> >    at java.io.RandomAccessFile.write(Unknown Source)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
> >>> >    at
> org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
> >>> >    at
> >>> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
> >>> >    at
> >>> >
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
> >>> >    at
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
> >>> >    at
> >>> >
> >>>
> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
> >>> >
> >>> >
> >>> > And the corresponding index info log:
> >>> >
> >>> > IFD [Indexer]: setInfoStream
> >>> >
> >>>
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
> >>> > IW 1 [Indexer]: setInfoStream:
> >>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
> >>> > autoCommit=false
> >>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76
> >>> > mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357
> >>> > ramBufferSizeMB=16.0
> >>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> >>> > maxFieldLength=2147483647 index=
> >>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
> >>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
> >>> docStoreOffset=0
> >>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> >>> > numBufDelTerms=0
> >>> > IW 1 [Indexer]:   index before flush
> >>> > IW 1 [Indexer]: now start transaction
> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> >>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> >>> > IW 1 [Indexer]: CMS: now merge
> >>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
> >>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
> >>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
> >>> > pending]
> >>> > IW 1 [Indexer]: now merge
> >>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
> >>> >  index=_7:Cx1075533->_0** _8:Cx2795**
> >>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
> >>> > IW 1 [Indexer]: merge: total 1074388 docs
> >>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0
> [mergeDocStores]
> >>> > index=_7:Cx1075533->_0** _8:Cx2795**
> >>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
> >>> [mergeDocStores]
> >>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
> >>> false]
> >>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> >>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
> >>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> >>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1
> pending]
> >>> > IW 1 [Indexer]: now merge
> >>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
> >>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
> >>> >  index=_0:C1074388 _8:Cx2795**
> >>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
> >>> > IW 1 [Indexer]: merge: total 2795 docs
> >>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
> >>> > [mergeDocStores] exc=java.io.IOException: There is not enough space
> on
> >>> the
> >>> > disk
> >>> > IW 1 [Indexer]: hit exception during merge
> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
> unreferenced
> >>> file
> >>> > "_1.fdt"
> >>> > IFD [Indexer]: delete "_1.fdt"
> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
> unreferenced
> >>> file
> >>> > "_1.fdx"
> >>> > IFD [Indexer]: delete "_1.fdx"
> >>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created
> unreferenced
> >>> file
> >>> > "_1.fnm"
> >>> > IFD [Indexer]: delete "_1.fnm"
> >>> > IW 1 [Indexer]: now rollback transaction
> >>> > IW 1 [Indexer]: all running merges have aborted
> >>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
> >>> false]
> >>> > IFD [Indexer]: delete "_0.nrm"
> >>> > IFD [Indexer]: delete "_0.tis"
> >>> > IFD [Indexer]: delete "_0.fnm"
> >>> > IFD [Indexer]: delete "_0.tii"
> >>> > IFD [Indexer]: delete "_0.frq"
> >>> > IFD [Indexer]: delete "_0.fdx"
> >>> > IFD [Indexer]: delete "_0.prx"
> >>> > IFD [Indexer]: delete "_0.fdt"
> >>> >
> >>> >
> >>> > Peter
> >>> >
> >>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <
> peterlkeegan@gmail.com
> >>> >wrote:
> >>> >
> >>> >>
> >>> >>
> >>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
> >>> >> lucene@mikemccandless.com> wrote:
> >>> >>
> >>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <
> peterlkeegan@gmail.com
> >>> >
> >>> >>> wrote:
> >>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
> >>> >>> > lucene@mikemccandless.com> wrote:
> >>> >>> >
> >>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <
> >>> peterlkeegan@gmail.com
> >>> >>> >
> >>> >>> >> wrote:
> >>> >>> >> > Even running in console mode, the exception is difficult to
> >>> >>> interpret.
> >>> >>> >> > Here's an exception that I think occurred during an add
> document,
> >>> >>> commit
> >>> >>> >> or
> >>> >>> >> > close:
> >>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
> >>> >>> segmentInfo
> >>> >>> >> > shows 5777
> >>> >>> >>
> >>> >>> >> That's spooky.  Do you have the full exception for this one?
>  What
> >>> IO
> >>> >>> >> system are you running on?  (Is it just a local drive on your
> >>> windows
> >>> >>> >> computer?) It's almost as if the IO system is not generating an
> >>> >>> >> IOException to Java when disk fills up.
> >>> >>> >>
> >>> >>> >
> >>> >>> > Index and code are all on a local drive. There is no other
> exception
> >>> >>> coming
> >>> >>> > back - just what I reported.
> >>> >>>
> >>> >>> But, you didn't report a traceback for this first one?
> >>> >>>
> >>> >>
> >>> >> Yes, I need to add some more printStackTrace calls.
> >>> >>
> >>> >>
> >>> >>>
> >>> >>> >> > I ensured that the disk space was low before updating the
> index.
> >>> >>> >>
> >>> >>> >> You mean, to intentionally test the disk-full case?
> >>> >>> >>
> >>> >>> >
> >>> >>> > Yes, that's right.
> >>> >>>
> >>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
> >>> >>> corruption to happen again, and post back the resulting output?
>  Make
> >>> >>> sure your index first passes CheckIndex before starting (so we
> don't
> >>> >>> begin the test w/ any pre-existing index corruption).
> >>> >>>
> >>> >>
> >>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
> >>> build
> >>> >> new indexes from scratch. This will take a while.
> >>> >>
> >>> >>
> >>> >>> >> > On another occasion, the exception was:
> >>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
> >>> _3:C107
> >>> >>> >> _4:C126
> >>> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
> >>> [mergeDocStores]
> >>> >>> >>
> >>> >>> >> In this case, the SegmentMerger was trying to open this segment,
> >>> but
> >>> >>> >> on attempting to read the first int from the fdx (fields index)
> >>> file
> >>> >>> >> for one of the segments, it hit EOF.
> >>> >>> >>
> >>> >>> >> This is also spooky -- this looks like index corruption, which
> >>> should
> >>> >>> >> never happen on hitting disk full.
> >>> >>> >>
> >>> >>> >
> >>> >>> > That's what I thought, too. Could Lucene be catching the
> IOException
> >>> and
> >>> >>> > turning it into a different exception?
> >>> >>>
> >>> >>> I think that's unlikely, but I guess possible.  We have "disk full"
> >>> >>> tests in the unit tests, that throw an IOException at different
> times.
> >>> >>>
> >>> >>> What exact windows version are you using?  The local drive is NTFS?
> >>> >>>
> >>> >>
> >>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
> >>> >>
> >>> >>
> >>> >>>
> >>> >>> Mike
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>
> >>> >
> >>>
> >>>
> >>>
> >>
> >
>
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
This is odd -- is it reproducible?

Can you narrow it down to a small set of docs that when indexed
produce a corrupted index?

If you attempt to optimize the index, does it fail?

Mike
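
One way to run that narrowing loop is to verify each trial index
programmatically; a minimal sketch, assuming the Lucene 2.9 CheckIndex API,
with the index path supplied as an argument:

    import java.io.File;
    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.FSDirectory;

    public class VerifyIndex {
      public static void main(String[] args) throws Exception {
        CheckIndex checker = new CheckIndex(FSDirectory.open(new File(args[0])));
        checker.setInfoStream(System.out);  // print the per-segment test results
        CheckIndex.Status status = checker.checkIndex();
        System.out.println(status.clean ? "index is clean" : "index is CORRUPT");
      }
    }

The same check can be run from the command line with
"java org.apache.lucene.index.CheckIndex <indexDir>", which appears to be how
the CheckIndex output quoted below was produced.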

On Tue, Oct 27, 2009 at 1:40 PM, Peter Keegan <pe...@gmail.com> wrote:
> It seems the index is corrupted immediately after the initial build (ample
> disk space was provided):
>
> Output from CheckIndex:
>
> Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2
>
> Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene
> 2.9]
>  1 of 1: name=_7 docCount=1077025
>    compound=false
>    hasProx=true
>    numFiles=8
>    size (MB)=3,201.196
>    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2, os=Windows
> 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
> 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
> java.vendor=Sun Microsystems Inc.}
>    docStoreOffset=0
>    docStoreSegment=_0
>    docStoreIsCompoundFile=false
>    no deletions
>    test: open reader.........OK
>    test: fields..............OK [33 fields]
>    test: field norms.........OK [33 fields]
>    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
> seen 482 + num docs deleted 0]
> java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 482 +
> num docs deleted 0
>    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>    test: stored fields.......OK [3231075 total field count; avg 3 fields
> per doc]
>    test: term vectors........OK [0 total vector count; avg 0 term/freq
> vector fields per doc]
> FAILED
>    WARNING: fixIndex() would remove reference to this segment; full
> exception:
> java.lang.RuntimeException: Term Index test failed
>    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>
> WARNING: 1 broken segments (containing 1077025 documents) detected
> WARNING: would write new segments file, and 1077025 documents would be lost,
> if -fix were specified
>
> Searching on this index seems to be fine, though.
>
> Here is the IndexWriter log from the build:
>
> IFD [Indexer]: setInfoStream
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
> IW 0 [Indexer]: setInfoStream:
> dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
> autoCommit=false
> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2
> mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2
> ramBufferSizeMB=16.0
> maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> maxFieldLength=2147483647 index=
> IW 0 [Indexer]: setRAMBufferSizeMB 910.25
> IW 0 [Indexer]: setMaxBufferedDocs 1000000
> IW 0 [Indexer]: flush at getReader
> IW 0 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
> flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> numBufDelTerms=0
> IW 0 [Indexer]:   index before flush
> IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463 allocMB=886.463
> deletesMB=23.803 triggerMB=910.25
> IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
> docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=171638 numBufDelTerms=171638
> IW 0 [UpdWriterBuild]:   index before flush
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=171638
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712 newFlushedSize=573198529
> docs/MB=313.985 new/old=61.666%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977 allocMB=901.32
> deletesMB=52.274 triggerMB=910.25
> IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
> docStoreOffset=171638 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=204995 numBufDelTerms=204995
> IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=204995
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632 newFlushedSize=544283851
> docs/MB=394.928 new/old=60.499%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=834.645 vs
> trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
> byteBlockFree=35.938 charBlockFree=8.938
> IW 0 [UpdWriterBuild]: DW:     nothing to free
> IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613 usedMB=910.272
> allocMB=834.707
> IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
> docStoreOffset=376633 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=168236 numBufDelTerms=168236
> IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=168236
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224 newFlushedSize=530720464
> docs/MB=332.394 new/old=60.641%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282 allocMB=835.832
> deletesMB=95.997 triggerMB=910.25
> IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
> docStoreOffset=544869 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=146894 numBufDelTerms=146894
> IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=146894
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800 newFlushedSize=522388771
> docs/MB=294.856 new/old=61.181%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724 allocMB=835.832
> deletesMB=118.535 triggerMB=910.25
> IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
> docStoreOffset=691763 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=162034 numBufDelTerms=162034
> IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=162034
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400 newFlushedSize=498741034
> docs/MB=340.668 new/old=60.076%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=771.396 vs
> trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
> byteBlockFree=39.688 charBlockFree=7.188
> IW 0 [UpdWriterBuild]: DW:     nothing to free
> IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374 usedMB=910.271
> allocMB=771.458
> IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
> docStoreOffset=853797 flushDocs=true flushDeletes=false flushDocStores=false
> numDocs=146250 numBufDelTerms=146250
> IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0
> IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=146250
> IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816 newFlushedSize=485212402
> docs/MB=316.056 new/old=59.987%
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
> false]
> IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
> false]
> IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
> IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
> IW 0 [UpdWriterBuild]: CMS: now merge
> IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
> IW 0 [Indexer]: commit: start
> IW 0 [Indexer]: commit: now prepare
> IW 0 [Indexer]: prepareCommit: flush
> IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
> docStoreOffset=1000047 flushDocs=true flushDeletes=true flushDocStores=true
> numDocs=76978 numBufDelTerms=76978
> IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> IW 0 [Indexer]:   flush shared docStore segment _0
> IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
> numDocs=1077025
> IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
> IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
> docs/MB=295.486 new/old=56.096%
> IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit = false]
> IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0 deleted
> docIDs and 0 deleted queries on 7 segments.
> IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit = false]
> IW 0 [Indexer]: LMP: findMerges: 7 segments
> IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
> IW 0 [Indexer]: CMS: now merge
> IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0 _2:C168236->_0
> _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Indexer]: CMS:   no more merges pending; now return
> IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> changeCount=21
> IW 0 [Indexer]: now sync _0.tis
> IW 0 [Indexer]: now sync _5.prx
> IW 0 [Indexer]: now sync _3.frq
> IW 0 [Indexer]: now sync _3.tii
> IW 0 [Indexer]: now sync _1.frq
> IW 0 [Indexer]: now sync _6.frq
> IW 0 [Indexer]: now sync _4.prx
> IW 0 [Indexer]: now sync _4.fnm
> IW 0 [Indexer]: now sync _2.tii
> IW 0 [Indexer]: now sync _3.fnm
> IW 0 [Indexer]: now sync _1.fnm
> IW 0 [Indexer]: now sync _6.tis
> IW 0 [Indexer]: now sync _4.frq
> IW 0 [Indexer]: now sync _5.nrm
> IW 0 [Indexer]: now sync _5.tis
> IW 0 [Indexer]: now sync _1.tii
> IW 0 [Indexer]: now sync _4.tis
> IW 0 [Indexer]: now sync _0.prx
> IW 0 [Indexer]: now sync _3.nrm
> IW 0 [Indexer]: now sync _4.tii
> IW 0 [Indexer]: now sync _0.nrm
> IW 0 [Indexer]: now sync _5.fnm
> IW 0 [Indexer]: now sync _1.tis
> IW 0 [Indexer]: now sync _0.fnm
> IW 0 [Indexer]: now sync _2.prx
> IW 0 [Indexer]: now sync _6.tii
> IW 0 [Indexer]: now sync _4.nrm
> IW 0 [Indexer]: now sync _2.frq
> IW 0 [Indexer]: now sync _5.frq
> IW 0 [Indexer]: now sync _3.prx
> IW 0 [Indexer]: now sync _5.tii
> IW 0 [Indexer]: now sync _2.fnm
> IW 0 [Indexer]: now sync _1.prx
> IW 0 [Indexer]: now sync _2.tis
> IW 0 [Indexer]: now sync _0.tii
> IW 0 [Indexer]: now sync _6.prx
> IW 0 [Indexer]: now sync _0.frq
> IW 0 [Indexer]: now sync _6.fnm
> IW 0 [Indexer]: now sync _0.fdx
> IW 0 [Indexer]: now sync _6.nrm
> IW 0 [Indexer]: now sync _0.fdt
> IW 0 [Indexer]: now sync _1.nrm
> IW 0 [Indexer]: now sync _2.nrm
> IW 0 [Indexer]: now sync _3.tis
> IW 0 [Indexer]: done all syncs
> IW 0 [Indexer]: commit: pendingCommit != null
> IW 0 [Indexer]: commit: wrote segments file "segments_2"
> IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit = true]
> IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
> IFD [Indexer]: delete "segments_1"
> IW 0 [Indexer]: commit: done
> IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
> flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> numBufDelTerms=0
> IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> [optimize] [total 1 pending]
> IW 0 [Indexer]: CMS: now merge
> IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0 _2:C168236->_0
> _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> into _7 [optimize]
> IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
> IW 0 [Indexer]: CMS:   no more merges pending; now return
> IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
> IW 0 [Lucene Merge Thread #0]: now merge
>  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> _4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
>  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
>  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
> _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> into _7 [optimize]
> IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
> IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0 _1:C204995->_0
> _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
> _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
> IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
> _1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
> _6:C76978->_0 into _7 [optimize]
> IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments ;
> isCommit = false]
> IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
> IW 0 [Indexer]: now flush at close
> IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
> flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
> numBufDelTerms=0
> IW 0 [Indexer]:   index before flush _7:C1077025->_0
> IW 0 [Indexer]:   flush shared docStore segment _6
> IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6 numDocs=0
> IW 0 [Indexer]: CMS: now merge
> IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
> IW 0 [Indexer]: CMS:   no more merges pending; now return
> IW 0 [Indexer]: now call final commit()
> IW 0 [Indexer]: startCommit(): start sizeInBytes=0
> IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
> IW 0 [Indexer]: now sync _7.prx
> IW 0 [Indexer]: now sync _7.fnm
> IW 0 [Indexer]: now sync _7.tis
> IW 0 [Indexer]: now sync _7.nrm
> IW 0 [Indexer]: now sync _7.tii
> IW 0 [Indexer]: now sync _7.frq
> IW 0 [Indexer]: done all syncs
> IW 0 [Indexer]: commit: pendingCommit != null
> IW 0 [Indexer]: commit: wrote segments file "segments_3"
> IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit = true]
> IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
> IFD [Indexer]: delete "_0.tis"
> IFD [Indexer]: delete "_5.prx"
> IFD [Indexer]: delete "_3.tii"
> IFD [Indexer]: delete "_3.frq"
> IFD [Indexer]: delete "_1.frq"
> IFD [Indexer]: delete "_6.frq"
> IFD [Indexer]: delete "_4.prx"
> IFD [Indexer]: delete "_4.fnm"
> IFD [Indexer]: delete "_2.tii"
> IFD [Indexer]: delete "_3.fnm"
> IFD [Indexer]: delete "_1.fnm"
> IFD [Indexer]: delete "_6.tis"
> IFD [Indexer]: delete "_4.frq"
> IFD [Indexer]: delete "_5.nrm"
> IFD [Indexer]: delete "_5.tis"
> IFD [Indexer]: delete "_1.tii"
> IFD [Indexer]: delete "_4.tis"
> IFD [Indexer]: delete "_0.prx"
> IFD [Indexer]: delete "_3.nrm"
> IFD [Indexer]: delete "_4.tii"
> IFD [Indexer]: delete "_0.nrm"
> IFD [Indexer]: delete "_5.fnm"
> IFD [Indexer]: delete "_1.tis"
> IFD [Indexer]: delete "_0.fnm"
> IFD [Indexer]: delete "_2.prx"
> IFD [Indexer]: delete "_6.tii"
> IFD [Indexer]: delete "_4.nrm"
> IFD [Indexer]: delete "_2.frq"
> IFD [Indexer]: delete "_5.frq"
> IFD [Indexer]: delete "_3.prx"
> IFD [Indexer]: delete "_5.tii"
> IFD [Indexer]: delete "_2.fnm"
> IFD [Indexer]: delete "_1.prx"
> IFD [Indexer]: delete "_2.tis"
> IFD [Indexer]: delete "_0.tii"
> IFD [Indexer]: delete "_6.prx"
> IFD [Indexer]: delete "_0.frq"
> IFD [Indexer]: delete "segments_2"
> IFD [Indexer]: delete "_6.fnm"
> IFD [Indexer]: delete "_6.nrm"
> IFD [Indexer]: delete "_1.nrm"
> IFD [Indexer]: delete "_2.nrm"
> IFD [Indexer]: delete "_3.tis"
> IW 0 [Indexer]: commit: done
> IW 0 [Indexer]: at close: _7:C1077025->_0
>
> I see no errors.
> Peter
>
>
> On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com> wrote:
>
>>
>>
>> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>>> OK that exception looks more reasonable, for a disk full event.
>>>
>>> But, I can't tell from your follow-on emails: did this lead to index
>>> corruption?
>>>
>>
>> Yes, but this may be caused by the application ignoring a Lucene exception
>> somewhere else. I will chase this down.
>>
>>>
>>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
>>> really should upgrade that to the latest 1.6.0 -- there's at least one
>>> known problem with Lucene and early 1.6.0 JREs.
>>>
>>
>> Yes, I remember this problem - that's why we stayed at _03
>> Thanks.
>>
>>>
>>> Mike
>>>
>>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com>
>>> wrote:
>>> > After rebuilding the corrupted indexes, the low disk space exception is
>>> now
>>> > occurring as expected. Sorry for the distraction.
>>> >
>>> > fyi, here are the details:
>>> >
>>> >  java.io.IOException: There is not enough space on the disk
>>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
>>> >    at java.io.RandomAccessFile.write(Unknown Source)
>>> >    at
>>> >
>>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>>> >    at
>>> >
>>> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>>> >    at
>>> >
>>> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>>> >    at
>>> >
>>> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>>> >    at
>>> >
>>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>>> >    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>>> >    at
>>> >
>>> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>>> >    at
>>> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>>> >    at
>>> > org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>>> >    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>>> >    at
>>> >
>>> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>>> >    at
>>> >
>>> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>>> >
>>> >
>>> > And the corresponding index info log:
>>> >
>>> > IFD [Indexer]: setInfoStream
>>> >
>>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
>>> > IW 1 [Indexer]: setInfoStream:
>>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
>>> > autoCommit=false
>>> >
>>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0
>>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>>> > maxFieldLength=2147483647 index=
>>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
>>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
>>> docStoreOffset=0
>>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>>> > numBufDelTerms=0
>>> > IW 1 [Indexer]:   index before flush
>>> > IW 1 [Indexer]: now start transaction
>>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
>>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>>> > IW 1 [Indexer]: CMS: now merge
>>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
>>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
>>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
>>> > pending]
>>> > IW 1 [Indexer]: now merge
>>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>>> >  index=_7:Cx1075533->_0** _8:Cx2795**
>>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
>>> > IW 1 [Indexer]: merge: total 1074388 docs
>>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
>>> > index=_7:Cx1075533->_0** _8:Cx2795**
>>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
>>> [mergeDocStores]
>>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
>>> false]
>>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
>>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
>>> > IW 1 [Indexer]: now merge
>>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
>>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>>> >  index=_0:C1074388 _8:Cx2795**
>>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
>>> > IW 1 [Indexer]: merge: total 2795 docs
>>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
>>> > [mergeDocStores] exc=java.io.IOException: There is not enough space on
>>> the
>>> > disk
>>> > IW 1 [Indexer]: hit exception during merge
>>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>>> file
>>> > "_1.fdt"
>>> > IFD [Indexer]: delete "_1.fdt"
>>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>>> file
>>> > "_1.fdx"
>>> > IFD [Indexer]: delete "_1.fdx"
>>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>>> file
>>> > "_1.fnm"
>>> > IFD [Indexer]: delete "_1.fnm"
>>> > IW 1 [Indexer]: now rollback transaction
>>> > IW 1 [Indexer]: all running merges have aborted
>>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
>>> false]
>>> > IFD [Indexer]: delete "_0.nrm"
>>> > IFD [Indexer]: delete "_0.tis"
>>> > IFD [Indexer]: delete "_0.fnm"
>>> > IFD [Indexer]: delete "_0.tii"
>>> > IFD [Indexer]: delete "_0.frq"
>>> > IFD [Indexer]: delete "_0.fdx"
>>> > IFD [Indexer]: delete "_0.prx"
>>> > IFD [Indexer]: delete "_0.fdt"
>>> >
>>> >
>>> > Peter
>>> >
>>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>>> >
>>> >>
>>> >>
>>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>>> >> lucene@mikemccandless.com> wrote:
>>> >>
>>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>>> >>> > lucene@mikemccandless.com> wrote:
>>> >>> >
>>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>>> >>> >> > Even running in console mode, the exception is difficult to
>>> >>> interpret.
>>> >>> >> > Here's an exception that I think occurred during an add document,
>>> >>> commit
>>> >>> >> or
>>> >>> >> > close:
>>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
>>> >>> segmentInfo
>>> >>> >> > shows 5777
>>> >>> >>
>>> >>> >> That's spooky.  Do you have the full exception for this one?  What
>>> IO
>>> >>> >> system are you running on?  (Is it just a local drive on your
>>> windows
>>> >>> >> computer?) It's almost as if the IO system is not generating an
>>> >>> >> IOException to Java when disk fills up.
>>> >>> >>
>>> >>> >
>>> >>> > Index and code are all on a local drive. There is no other exception
>>> >>> coming
>>> >>> > back - just what I reported.
>>> >>>
>>> >>> But, you didn't report a traceback for this first one?
>>> >>>
>>> >>
>>> >> Yes, I need to add some more printStackTrace calls.
>>> >>
>>> >>
>>> >>>
>>> >>> >> > I ensured that the disk space was low before updating the index.
>>> >>> >>
>>> >>> >> You mean, to intentionally test the disk-full case?
>>> >>> >>
>>> >>> >
>>> >>> > Yes, that's right.
>>> >>>
>>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>>> >>> corruption to happen again, and post back the resulting output?  Make
>>> >>> sure your index first passes CheckIndex before starting (so we don't
>>> >>> begin the test w/ any pre-existing index corruption).
>>> >>>
>>> >>
>>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
>>> build
>>> >> new indexes from scratch. This will take a while.
>>> >>
>>> >>
>>> >>> >> > On another occasion, the exception was:
>>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
>>> _3:C107
>>> >>> >> _4:C126
>>> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
>>> [mergeDocStores]
>>> >>> >>
>>> >>> >> In this case, the SegmentMerger was trying to open this segment,
>>> but
>>> >>> >> on attempting to read the first int from the fdx (fields index)
>>> file
>>> >>> >> for one of the segments, it hit EOF.
>>> >>> >>
>>> >>> >> This is also spooky -- this looks like index corruption, which
>>> should
>>> >>> >> never happen on hitting disk full.
>>> >>> >>
>>> >>> >
>>> >>> > That's what I thought, too. Could Lucene be catching the IOException
>>> and
>>> >>> > turning it into a different exception?
>>> >>>
>>> >>> I think that's unlikely, but I guess possible.  We have "disk full"
>>> >>> tests in the unit tests, that throw an IOException at different times.
>>> >>>
>>> >>> What exact windows version are you using?  The local drive is NTFS?
>>> >>>
>>> >>
>>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>>> >>
>>> >>
>>> >>>
>>> >>> Mike
>>> >>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
It seems the index is corrupted immediately after the initial build (ample
disk space was provided):

Output from CheckIndex:

Opening index @ D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2

Segments file=segments_3 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene
2.9]
  1 of 1: name=_7 docCount=1077025
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=3,201.196
    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2, os=Windows
2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26
07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03,
java.vendor=Sun Microsystems Inc.}
    docStoreOffset=0
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term contents:? docFreq=1 != num docs
seen 482 + num docs deleted 0]
java.lang.RuntimeException: term contents:? docFreq=1 != num docs seen 482 +
num docs deleted 0
    at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [3231075 total field count; avg 3 fields
per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq
vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
    at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
    at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

WARNING: 1 broken segments (containing 1077025 documents) detected
WARNING: would write new segments file, and 1077025 documents would be lost,
if -fix were specified

Searching on this index seems to be fine, though.
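
For reference, this is roughly how the checker can be run from a shell -- the core jar name is a placeholder for whatever your 2.9 build produces:

  java -cp lucene-core.jar org.apache.lucene.index.CheckIndex D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.search.main.2

I did not pass -fix, since (per the warning above) that would drop the broken segment and lose all 1077025 documents.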

Here is the IndexWriter log from the build:

IFD [Indexer]: setInfoStream
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@2a9cfec1
IW 0 [Indexer]: setInfoStream:
dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes1\lresumes1.luc\lresumes1.update.main.2
autoCommit=false
mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@291946c2 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@3a747fa2 ramBufferSizeMB=16.0
maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
maxFieldLength=2147483647 index=
IW 0 [Indexer]: setRAMBufferSizeMB 910.25
IW 0 [Indexer]: setMaxBufferedDocs 1000000
IW 0 [Indexer]: flush at getReader
IW 0 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
numBufDelTerms=0
IW 0 [Indexer]:   index before flush
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=886.463 allocMB=886.463
deletesMB=23.803 triggerMB=910.25
IW 0 [UpdWriterBuild]:   flush: segment=_0 docStoreSegment=_0
docStoreOffset=0 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=171638 numBufDelTerms=171638
IW 0 [UpdWriterBuild]:   index before flush
IW 0 [UpdWriterBuild]: DW: flush postings as segment _0 numDocs=171638
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=929523712 newFlushedSize=573198529
docs/MB=313.985 new/old=61.666%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [1 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 1 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 1 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=857.977 allocMB=901.32
deletesMB=52.274 triggerMB=910.25
IW 0 [UpdWriterBuild]:   flush: segment=_1 docStoreSegment=_0
docStoreOffset=171638 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=204995 numBufDelTerms=204995
IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _1 numDocs=204995
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=899653632 newFlushedSize=544283851
docs/MB=394.928 new/old=60.499%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [2 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 2 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 2 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=834.645 vs
trigger=910.25 allocMB=901.32 deletesMB=75.627 vs trigger=955.762
byteBlockFree=35.938 charBlockFree=8.938
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=66.613 usedMB=910.272
allocMB=834.707
IW 0 [UpdWriterBuild]:   flush: segment=_2 docStoreSegment=_0
docStoreOffset=376633 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=168236 numBufDelTerms=168236
IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _2 numDocs=168236
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=875188224 newFlushedSize=530720464
docs/MB=332.394 new/old=60.641%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [3 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 3 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 3 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=814.282 allocMB=835.832
deletesMB=95.997 triggerMB=910.25
IW 0 [UpdWriterBuild]:   flush: segment=_3 docStoreSegment=_0
docStoreOffset=544869 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=146894 numBufDelTerms=146894
IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
_2:C168236->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _3 numDocs=146894
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=853836800 newFlushedSize=522388771
docs/MB=294.856 new/old=61.181%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [4 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 4 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 4 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now flush @ usedMB=791.724 allocMB=835.832
deletesMB=118.535 triggerMB=910.25
IW 0 [UpdWriterBuild]:   flush: segment=_4 docStoreSegment=_0
docStoreOffset=691763 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=162034 numBufDelTerms=162034
IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _4 numDocs=162034
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=830182400 newFlushedSize=498741034
docs/MB=340.668 new/old=60.076%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [5 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 5 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 5 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [UpdWriterBuild]: DW:   RAM: now balance allocations: usedMB=771.396 vs
trigger=910.25 allocMB=835.832 deletesMB=138.875 vs trigger=955.762
byteBlockFree=39.688 charBlockFree=7.188
IW 0 [UpdWriterBuild]: DW:     nothing to free
IW 0 [UpdWriterBuild]: DW:     after free: freedMB=64.374 usedMB=910.271
allocMB=771.458
IW 0 [UpdWriterBuild]:   flush: segment=_5 docStoreSegment=_0
docStoreOffset=853797 flushDocs=true flushDeletes=false flushDocStores=false
numDocs=146250 numBufDelTerms=146250
IW 0 [UpdWriterBuild]:   index before flush _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0
IW 0 [UpdWriterBuild]: DW: flush postings as segment _5 numDocs=146250
IW 0 [UpdWriterBuild]: DW:   oldRAMSize=808866816 newFlushedSize=485212402
docs/MB=316.056 new/old=59.987%
IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
false]
IFD [UpdWriterBuild]: now checkpoint "segments_1" [6 segments ; isCommit =
false]
IW 0 [UpdWriterBuild]: LMP: findMerges: 6 segments
IW 0 [UpdWriterBuild]: LMP:   level 8.008305 to 8.758305: 6 segments
IW 0 [UpdWriterBuild]: CMS: now merge
IW 0 [UpdWriterBuild]: CMS:   index: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
IW 0 [UpdWriterBuild]: CMS:   no more merges pending; now return
IW 0 [Indexer]: commit: start
IW 0 [Indexer]: commit: now prepare
IW 0 [Indexer]: prepareCommit: flush
IW 0 [Indexer]:   flush: segment=_6 docStoreSegment=_0
docStoreOffset=1000047 flushDocs=true flushDeletes=true flushDocStores=true
numDocs=76978 numBufDelTerms=76978
IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
IW 0 [Indexer]:   flush shared docStore segment _0
IW 0 [Indexer]: DW: closeDocStore: 2 files to flush to segment _0
numDocs=1077025
IW 0 [Indexer]: DW: flush postings as segment _6 numDocs=76978
IW 0 [Indexer]: DW:   oldRAMSize=486968320 newFlushedSize=273168136
docs/MB=295.486 new/old=56.096%
IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit = false]
IW 0 [Indexer]: DW: apply 1077025 buffered deleted terms and 0 deleted
docIDs and 0 deleted queries on 7 segments.
IFD [Indexer]: now checkpoint "segments_1" [7 segments ; isCommit = false]
IW 0 [Indexer]: LMP: findMerges: 7 segments
IW 0 [Indexer]: LMP:   level 8.008305 to 8.758305: 7 segments
IW 0 [Indexer]: CMS: now merge
IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0 _2:C168236->_0
_3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Indexer]: CMS:   no more merges pending; now return
IW 0 [Indexer]: startCommit(): start sizeInBytes=0
IW 0 [Indexer]: startCommit index=_0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
changeCount=21
IW 0 [Indexer]: now sync _0.tis
IW 0 [Indexer]: now sync _5.prx
IW 0 [Indexer]: now sync _3.frq
IW 0 [Indexer]: now sync _3.tii
IW 0 [Indexer]: now sync _1.frq
IW 0 [Indexer]: now sync _6.frq
IW 0 [Indexer]: now sync _4.prx
IW 0 [Indexer]: now sync _4.fnm
IW 0 [Indexer]: now sync _2.tii
IW 0 [Indexer]: now sync _3.fnm
IW 0 [Indexer]: now sync _1.fnm
IW 0 [Indexer]: now sync _6.tis
IW 0 [Indexer]: now sync _4.frq
IW 0 [Indexer]: now sync _5.nrm
IW 0 [Indexer]: now sync _5.tis
IW 0 [Indexer]: now sync _1.tii
IW 0 [Indexer]: now sync _4.tis
IW 0 [Indexer]: now sync _0.prx
IW 0 [Indexer]: now sync _3.nrm
IW 0 [Indexer]: now sync _4.tii
IW 0 [Indexer]: now sync _0.nrm
IW 0 [Indexer]: now sync _5.fnm
IW 0 [Indexer]: now sync _1.tis
IW 0 [Indexer]: now sync _0.fnm
IW 0 [Indexer]: now sync _2.prx
IW 0 [Indexer]: now sync _6.tii
IW 0 [Indexer]: now sync _4.nrm
IW 0 [Indexer]: now sync _2.frq
IW 0 [Indexer]: now sync _5.frq
IW 0 [Indexer]: now sync _3.prx
IW 0 [Indexer]: now sync _5.tii
IW 0 [Indexer]: now sync _2.fnm
IW 0 [Indexer]: now sync _1.prx
IW 0 [Indexer]: now sync _2.tis
IW 0 [Indexer]: now sync _0.tii
IW 0 [Indexer]: now sync _6.prx
IW 0 [Indexer]: now sync _0.frq
IW 0 [Indexer]: now sync _6.fnm
IW 0 [Indexer]: now sync _0.fdx
IW 0 [Indexer]: now sync _6.nrm
IW 0 [Indexer]: now sync _0.fdt
IW 0 [Indexer]: now sync _1.nrm
IW 0 [Indexer]: now sync _2.nrm
IW 0 [Indexer]: now sync _3.tis
IW 0 [Indexer]: done all syncs
IW 0 [Indexer]: commit: pendingCommit != null
IW 0 [Indexer]: commit: wrote segments file "segments_2"
IFD [Indexer]: now checkpoint "segments_2" [7 segments ; isCommit = true]
IFD [Indexer]: deleteCommits: now decRef commit "segments_1"
IFD [Indexer]: delete "segments_1"
IW 0 [Indexer]: commit: done
IW 0 [Indexer]: optimize: index now _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
numBufDelTerms=0
IW 0 [Indexer]:   index before flush _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Indexer]: add merge to pendingMerges: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
[optimize] [total 1 pending]
IW 0 [Indexer]: CMS: now merge
IW 0 [Indexer]: CMS:   index: _0:C171638->_0 _1:C204995->_0 _2:C168236->_0
_3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Indexer]: CMS:   consider merge _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
into _7 [optimize]
IW 0 [Indexer]: CMS:     launch new thread [Lucene Merge Thread #0]
IW 0 [Indexer]: CMS:   no more merges pending; now return
IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: start
IW 0 [Lucene Merge Thread #0]: now merge
  merge=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
_4:C162034->_0 _5:C146250->_0 _6:C76978->_0 into _7 [optimize]
  merge=org.apache.lucene.index.MergePolicy$OneMerge@78688954
  index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0 _3:C146894->_0
_4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Lucene Merge Thread #0]: merging _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
into _7 [optimize]
IW 0 [Lucene Merge Thread #0]: merge: total 1077025 docs
IW 0 [Lucene Merge Thread #0]: commitMerge: _0:C171638->_0 _1:C204995->_0
_2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
into _7 [optimize] index=_0:C171638->_0 _1:C204995->_0 _2:C168236->_0
_3:C146894->_0 _4:C162034->_0 _5:C146250->_0 _6:C76978->_0
IW 0 [Lucene Merge Thread #0]: commitMergeDeletes _0:C171638->_0
_1:C204995->_0 _2:C168236->_0 _3:C146894->_0 _4:C162034->_0 _5:C146250->_0
_6:C76978->_0 into _7 [optimize]
IFD [Lucene Merge Thread #0]: now checkpoint "segments_2" [1 segments ;
isCommit = false]
IW 0 [Lucene Merge Thread #0]: CMS:   merge thread: done
IW 0 [Indexer]: now flush at close
IW 0 [Indexer]:   flush: segment=null docStoreSegment=_6 docStoreOffset=0
flushDocs=false flushDeletes=true flushDocStores=true numDocs=0
numBufDelTerms=0
IW 0 [Indexer]:   index before flush _7:C1077025->_0
IW 0 [Indexer]:   flush shared docStore segment _6
IW 0 [Indexer]: DW: closeDocStore: 0 files to flush to segment _6 numDocs=0
IW 0 [Indexer]: CMS: now merge
IW 0 [Indexer]: CMS:   index: _7:C1077025->_0
IW 0 [Indexer]: CMS:   no more merges pending; now return
IW 0 [Indexer]: now call final commit()
IW 0 [Indexer]: startCommit(): start sizeInBytes=0
IW 0 [Indexer]: startCommit index=_7:C1077025->_0 changeCount=23
IW 0 [Indexer]: now sync _7.prx
IW 0 [Indexer]: now sync _7.fnm
IW 0 [Indexer]: now sync _7.tis
IW 0 [Indexer]: now sync _7.nrm
IW 0 [Indexer]: now sync _7.tii
IW 0 [Indexer]: now sync _7.frq
IW 0 [Indexer]: done all syncs
IW 0 [Indexer]: commit: pendingCommit != null
IW 0 [Indexer]: commit: wrote segments file "segments_3"
IFD [Indexer]: now checkpoint "segments_3" [1 segments ; isCommit = true]
IFD [Indexer]: deleteCommits: now decRef commit "segments_2"
IFD [Indexer]: delete "_0.tis"
IFD [Indexer]: delete "_5.prx"
IFD [Indexer]: delete "_3.tii"
IFD [Indexer]: delete "_3.frq"
IFD [Indexer]: delete "_1.frq"
IFD [Indexer]: delete "_6.frq"
IFD [Indexer]: delete "_4.prx"
IFD [Indexer]: delete "_4.fnm"
IFD [Indexer]: delete "_2.tii"
IFD [Indexer]: delete "_3.fnm"
IFD [Indexer]: delete "_1.fnm"
IFD [Indexer]: delete "_6.tis"
IFD [Indexer]: delete "_4.frq"
IFD [Indexer]: delete "_5.nrm"
IFD [Indexer]: delete "_5.tis"
IFD [Indexer]: delete "_1.tii"
IFD [Indexer]: delete "_4.tis"
IFD [Indexer]: delete "_0.prx"
IFD [Indexer]: delete "_3.nrm"
IFD [Indexer]: delete "_4.tii"
IFD [Indexer]: delete "_0.nrm"
IFD [Indexer]: delete "_5.fnm"
IFD [Indexer]: delete "_1.tis"
IFD [Indexer]: delete "_0.fnm"
IFD [Indexer]: delete "_2.prx"
IFD [Indexer]: delete "_6.tii"
IFD [Indexer]: delete "_4.nrm"
IFD [Indexer]: delete "_2.frq"
IFD [Indexer]: delete "_5.frq"
IFD [Indexer]: delete "_3.prx"
IFD [Indexer]: delete "_5.tii"
IFD [Indexer]: delete "_2.fnm"
IFD [Indexer]: delete "_1.prx"
IFD [Indexer]: delete "_2.tis"
IFD [Indexer]: delete "_0.tii"
IFD [Indexer]: delete "_6.prx"
IFD [Indexer]: delete "_0.frq"
IFD [Indexer]: delete "segments_2"
IFD [Indexer]: delete "_6.fnm"
IFD [Indexer]: delete "_6.nrm"
IFD [Indexer]: delete "_1.nrm"
IFD [Indexer]: delete "_2.nrm"
IFD [Indexer]: delete "_3.tis"
IW 0 [Indexer]: commit: done
IW 0 [Indexer]: at close: _7:C1077025->_0

I see no errors.
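
For completeness: the log above was captured by pointing the writer's info stream at a file before any documents were added. A minimal sketch, where "writer" and the log file name stand in for our actual setup:

import java.io.FileOutputStream;
import java.io.PrintStream;

// echo IndexWriter's internal activity (flushes, merges, commits, deletes) to a file
writer.setInfoStream(new PrintStream(new FileOutputStream("indexwriter.log"), true));
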
Peter


On Tue, Oct 27, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com> wrote:

>
>
> On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> OK that exception looks more reasonable, for a disk full event.
>>
>> But, I can't tell from your follow-on emails: did this lead to index
>> corruption?
>>
>
> Yes, but this may be caused by the application ignoring a Lucene exception
> somewhere else. I will chase this down.
>
>>
>> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
>> really should upgrade that to the latest 1.6.0 -- there's at least one
>> known problem with Lucene and early 1.6.0 JREs.
>>
>
> Yes, I remember this problem - that's why we stayed at _03
> Thanks.
>
>>
>> Mike
>>
>> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > After rebuilding the corrupted indexes, the low disk space exception is
>> now
>> > occurring as expected. Sorry for the distraction.
>> >
>> > fyi, here are the details:
>> >
>> >  java.io.IOException: There is not enough space on the disk
>> >    at java.io.RandomAccessFile.writeBytes(Native Method)
>> >    at java.io.RandomAccessFile.write(Unknown Source)
>> >    at
>> >
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>> >    at
>> >
>> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>> >    at
>> >
>> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>> >    at
>> >
>> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>> >    at
>> >
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>> >    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>> >    at
>> >
>> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>> >    at
>> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>> >    at
>> > org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>> >    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>> >    at
>> >
>> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>> >    at
>> >
>> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>> >
>> >
>> > And the corresponding index info log:
>> >
>> > IFD [Indexer]: setInfoStream
>> >
>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
>> > IW 1 [Indexer]: setInfoStream:
>> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
>> > autoCommit=false
>> >
>> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0
>> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>> > maxFieldLength=2147483647 index=
>> > IW 1 [Indexer]: flush at addIndexesNoOptimize
>> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
>> docStoreOffset=0
>> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
>> > numBufDelTerms=0
>> > IW 1 [Indexer]:   index before flush
>> > IW 1 [Indexer]: now start transaction
>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> > IW 1 [Indexer]: CMS: now merge
>> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
>> > IW 1 [Indexer]: CMS:   no more merges pending; now return
>> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
>> > pending]
>> > IW 1 [Indexer]: now merge
>> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>> >  index=_7:Cx1075533->_0** _8:Cx2795**
>> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
>> > IW 1 [Indexer]: merge: total 1074388 docs
>> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
>> > index=_7:Cx1075533->_0** _8:Cx2795**
>> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
>> [mergeDocStores]
>> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
>> false]
>> > IW 1 [Indexer]: LMP: findMerges: 2 segments
>> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
>> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
>> > IW 1 [Indexer]: now merge
>> >  merge=_8:Cx2795 into _1 [mergeDocStores]
>> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>> >  index=_0:C1074388 _8:Cx2795**
>> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
>> > IW 1 [Indexer]: merge: total 2795 docs
>> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
>> > [mergeDocStores] exc=java.io.IOException: There is not enough space on
>> the
>> > disk
>> > IW 1 [Indexer]: hit exception during merge
>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file
>> > "_1.fdt"
>> > IFD [Indexer]: delete "_1.fdt"
>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file
>> > "_1.fdx"
>> > IFD [Indexer]: delete "_1.fdx"
>> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file
>> > "_1.fnm"
>> > IFD [Indexer]: delete "_1.fnm"
>> > IW 1 [Indexer]: now rollback transaction
>> > IW 1 [Indexer]: all running merges have aborted
>> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
>> false]
>> > IFD [Indexer]: delete "_0.nrm"
>> > IFD [Indexer]: delete "_0.tis"
>> > IFD [Indexer]: delete "_0.fnm"
>> > IFD [Indexer]: delete "_0.tii"
>> > IFD [Indexer]: delete "_0.frq"
>> > IFD [Indexer]: delete "_0.fdx"
>> > IFD [Indexer]: delete "_0.prx"
>> > IFD [Indexer]: delete "_0.fdt"
>> >
>> >
>> > Peter
>> >
>> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>> >
>> >>
>> >>
>> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>> >> lucene@mikemccandless.com> wrote:
>> >>
>> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>> >>> > lucene@mikemccandless.com> wrote:
>> >>> >
>> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
>> >>> >> > Even running in console mode, the exception is difficult to
>> >>> interpret.
>> >>> >> > Here's an exception that I think occurred during an add document,
>> >>> commit
>> >>> >> or
>> >>> >> > close:
>> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
>> >>> segmentInfo
>> >>> >> > shows 5777
>> >>> >>
>> >>> >> That's spooky.  Do you have the full exception for this one?  What
>> IO
>> >>> >> system are you running on?  (Is it just a local drive on your
>> windows
>> >>> >> computer?) It's almost as if the IO system is not generating an
>> >>> >> IOException to Java when disk fills up.
>> >>> >>
>> >>> >
>> >>> > Index and code are all on a local drive. There is no other exception
>> >>> coming
>> >>> > back - just what I reported.
>> >>>
>> >>> But, you didn't report a traceback for this first one?
>> >>>
>> >>
>> >> Yes, I need to add some more printStackTrace calls.
>> >>
>> >>
>> >>>
>> >>> >> > I ensured that the disk space was low before updating the index.
>> >>> >>
>> >>> >> You mean, to intentionally test the disk-full case?
>> >>> >>
>> >>> >
>> >>> > Yes, that's right.
>> >>>
>> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>> >>> corruption to happen again, and post back the resulting output?  Make
>> >>> sure your index first passes CheckIndex before starting (so we don't
>> >>> begin the test w/ any pre-existing index corruption).
>> >>>
>> >>
>> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
>> build
>> >> new indexes from scratch. This will take a while.
>> >>
>> >>
>> >>> >> > On another occasion, the exception was:
>> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
>> _3:C107
>> >>> >> _4:C126
>> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
>> [mergeDocStores]
>> >>> >>
>> >>> >> In this case, the SegmentMerger was trying to open this segment,
>> but
>> >>> >> on attempting to read the first int from the fdx (fields index)
>> file
>> >>> >> for one of the segments, it hit EOF.
>> >>> >>
>> >>> >> This is also spooky -- this looks like index corruption, which
>> should
>> >>> >> never happen on hitting disk full.
>> >>> >>
>> >>> >
>> >>> > That's what I thought, too. Could Lucene be catching the IOException
>> and
>> >>> > turning it into a different exception?
>> >>>
>> >>> I think that's unlikely, but I guess possible.  We have "disk full"
>> >>> tests in the unit tests, that throw an IOException at different times.
>> >>>
>> >>> What exact windows version are you using?  The local drive is NTFS?
>> >>>
>> >>
>> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>> >>
>> >>
>> >>>
>> >>> Mike
>> >>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
On Tue, Oct 27, 2009 at 10:37 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> OK that exception looks more reasonable, for a disk full event.
>
> But, I can't tell from your follow-on emails: did this lead to index
> corruption?
>

Yes, but this may be caused by the application ignoring a Lucene exception
somewhere else. I will chase this down.
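
To be concrete about the kind of bug I'm hunting for, here is a minimal sketch of our update path -- "writer" and "updateDir" are placeholders for the actual objects:

try {
    writer.addIndexesNoOptimize(new Directory[] { updateDir });
    writer.optimize();
    writer.commit();
} catch (IOException ioe) {
    ioe.printStackTrace(); // if this gets swallowed, a disk-full failure looks like a successful update
    writer.rollback();     // discard the partially merged segments rather than committing them
    throw ioe;
}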

>
> Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
> really should upgrade that to the latest 1.6.0 -- there's at least one
> known problem with Lucene and early 1.6.0 JREs.
>

Yes, I remember this problem - that's why we stayed at _03
Thanks.

>
> Mike
>
> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com>
> wrote:
> > After rebuilding the corrupted indexes, the low disk space exception is
> now
> > occurring as expected. Sorry for the distraction.
> >
> > fyi, here are the details:
> >
> >  java.io.IOException: There is not enough space on the disk
> >    at java.io.RandomAccessFile.writeBytes(Native Method)
> >    at java.io.RandomAccessFile.write(Unknown Source)
> >    at
> >
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
> >    at
> >
> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
> >    at
> >
> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
> >    at
> >
> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
> >    at
> >
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
> >    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
> >    at
> > org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
> >    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
> >    at
> > org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
> >    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
> >    at
> >
> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
> >    at
> >
> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
> >
> >
> > And the corresponding index info log:
> >
> > IFD [Indexer]: setInfoStream
> >
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
> > IW 1 [Indexer]: setInfoStream:
> > dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
> > autoCommit=false
> >
> > mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0
> > maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> > maxFieldLength=2147483647 index=
> > IW 1 [Indexer]: flush at addIndexesNoOptimize
> > IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
> docStoreOffset=0
> > flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> > numBufDelTerms=0
> > IW 1 [Indexer]:   index before flush
> > IW 1 [Indexer]: now start transaction
> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> > IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> > IW 1 [Indexer]: CMS: now merge
> > IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
> > IW 1 [Indexer]: CMS:   no more merges pending; now return
> > IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
> > pending]
> > IW 1 [Indexer]: now merge
> >  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
> >  index=_7:Cx1075533->_0** _8:Cx2795**
> > IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
> > IW 1 [Indexer]: merge: total 1074388 docs
> > IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
> > index=_7:Cx1075533->_0** _8:Cx2795**
> > IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
> [mergeDocStores]
> > IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit =
> false]
> > IW 1 [Indexer]: LMP: findMerges: 2 segments
> > IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
> > IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> > IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
> > IW 1 [Indexer]: now merge
> >  merge=_8:Cx2795 into _1 [mergeDocStores]
> >  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
> >  index=_0:C1074388 _8:Cx2795**
> > IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
> > IW 1 [Indexer]: merge: total 2795 docs
> > IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
> > [mergeDocStores] exc=java.io.IOException: There is not enough space on
> the
> > disk
> > IW 1 [Indexer]: hit exception during merge
> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file
> > "_1.fdt"
> > IFD [Indexer]: delete "_1.fdt"
> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file
> > "_1.fdx"
> > IFD [Indexer]: delete "_1.fdx"
> > IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file
> > "_1.fnm"
> > IFD [Indexer]: delete "_1.fnm"
> > IW 1 [Indexer]: now rollback transaction
> > IW 1 [Indexer]: all running merges have aborted
> > IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit =
> false]
> > IFD [Indexer]: delete "_0.nrm"
> > IFD [Indexer]: delete "_0.tis"
> > IFD [Indexer]: delete "_0.fnm"
> > IFD [Indexer]: delete "_0.tii"
> > IFD [Indexer]: delete "_0.frq"
> > IFD [Indexer]: delete "_0.fdx"
> > IFD [Indexer]: delete "_0.prx"
> > IFD [Indexer]: delete "_0.fdt"
> >
> >
> > Peter
> >
> > On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >
> >>
> >>
> >> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
> >> lucene@mikemccandless.com> wrote:
> >>
> >>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
> >>> wrote:
> >>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
> >>> > lucene@mikemccandless.com> wrote:
> >>> >
> >>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com> wrote:
> >>> >> > Even running in console mode, the exception is difficult to
> >>> interpret.
> >>> >> > Here's an exception that I think occurred during an add document,
> >>> commit
> >>> >> or
> >>> >> > close:
> >>> >> > doc counts differ for segment _g: field Reader shows 137 but
> >>> segmentInfo
> >>> >> > shows 5777
> >>> >>
> >>> >> That's spooky.  Do you have the full exception for this one?  What
> IO
> >>> >> system are you running on?  (Is it just a local drive on your
> windows
> >>> >> computer?) It's almost as if the IO system is not generating an
> >>> >> IOException to Java when disk fills up.
> >>> >>
> >>> >
> >>> > Index and code are all on a local drive. There is no other exception
> >>> coming
> >>> > back - just what I reported.
> >>>
> >>> But, you didn't report a traceback for this first one?
> >>>
> >>
> >> Yes, I need to add some more printStackTrace calls.
> >>
> >>
> >>>
> >>> >> > I ensured that the disk space was low before updating the index.
> >>> >>
> >>> >> You mean, to intentionally test the disk-full case?
> >>> >>
> >>> >
> >>> > Yes, that's right.
> >>>
> >>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
> >>> corruption to happen again, and post back the resulting output?  Make
> >>> sure your index first passes CheckIndex before starting (so we don't
> >>> begin the test w/ any pre-existing index corruption).
> >>>
> >>
> >> Good point about CheckIndex.  I've already found 2 bad ones. I will
> build
> >> new indexes from scratch. This will take a while.
> >>
> >>
> >>> >> > On another occasion, the exception was:
> >>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123
> _3:C107
> >>> >> _4:C126
> >>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize]
> [mergeDocStores]
> >>> >>
> >>> >> In this case, the SegmentMerger was trying to open this segment, but
> >>> >> on attempting to read the first int from the fdx (fields index) file
> >>> >> for one of the segments, it hit EOF.
> >>> >>
> >>> >> This is also spooky -- this looks like index corruption, which
> should
> >>> >> never happen on hitting disk full.
> >>> >>
> >>> >
> >>> > That's what I thought, too. Could Lucene be catching the IOException
> and
> >>> > turning it into a different exception?
> >>>
> >>> I think that's unlikely, but I guess possible.  We have "disk full"
> >>> tests in the unit tests, that throw an IOException at different times.
> >>>
> >>> What exact windows version are you using?  The local drive is NTFS?
> >>>
> >>
> >> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
> >>
> >>
> >>>
> >>> Mike
> >>>
> >>>
> >>>
> >>
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
OK that exception looks more reasonable, for a disk full event.

But, I can't tell from your follow-on emails: did this lead to index corruption?

Also, I noticed you're using a rather old 1.6.0 JRE (1.6.0_03) -- you
really should upgrade that to the latest 1.6.0 -- there's at least one
known problem with Lucene and early 1.6.0 JREs.

Mike

On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com> wrote:
> After rebuilding the corrupted indexes, the low disk space exception is now
> occurring as expected. Sorry for the distraction.
>
> fyi, here are the details:
>
>  java.io.IOException: There is not enough space on the disk
>    at java.io.RandomAccessFile.writeBytes(Native Method)
>    at java.io.RandomAccessFile.write(Unknown Source)
>    at
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>    at
> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>    at
> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>    at
> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>    at
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>    at
> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>    at
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>    at
> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>    at
> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>
>
> And the corresponding index info log:
>
> IFD [Indexer]: setInfoStream
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
> IW 1 [Indexer]: setInfoStream:
> dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
> autoCommit=false
> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0
> maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> maxFieldLength=2147483647 index=
> IW 1 [Indexer]: flush at addIndexesNoOptimize
> IW 1 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
> flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> numBufDelTerms=0
> IW 1 [Indexer]:   index before flush
> IW 1 [Indexer]: now start transaction
> IW 1 [Indexer]: LMP: findMerges: 2 segments
> IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> IW 1 [Indexer]: CMS: now merge
> IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: CMS:   no more merges pending; now return
> IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
> pending]
> IW 1 [Indexer]: now merge
>  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>  index=_7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
> IW 1 [Indexer]: merge: total 1074388 docs
> IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
> index=_7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0 [mergeDocStores]
> IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit = false]
> IW 1 [Indexer]: LMP: findMerges: 2 segments
> IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
> IW 1 [Indexer]: now merge
>  merge=_8:Cx2795 into _1 [mergeDocStores]
>  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>  index=_0:C1074388 _8:Cx2795**
> IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
> IW 1 [Indexer]: merge: total 2795 docs
> IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
> [mergeDocStores] exc=java.io.IOException: There is not enough space on the
> disk
> IW 1 [Indexer]: hit exception during merge
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
> "_1.fdt"
> IFD [Indexer]: delete "_1.fdt"
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
> "_1.fdx"
> IFD [Indexer]: delete "_1.fdx"
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
> "_1.fnm"
> IFD [Indexer]: delete "_1.fnm"
> IW 1 [Indexer]: now rollback transaction
> IW 1 [Indexer]: all running merges have aborted
> IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit = false]
> IFD [Indexer]: delete "_0.nrm"
> IFD [Indexer]: delete "_0.tis"
> IFD [Indexer]: delete "_0.fnm"
> IFD [Indexer]: delete "_0.tii"
> IFD [Indexer]: delete "_0.frq"
> IFD [Indexer]: delete "_0.fdx"
> IFD [Indexer]: delete "_0.prx"
> IFD [Indexer]: delete "_0.fdt"
>
>
> Peter
>
> On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <pe...@gmail.com>wrote:
>
>>
>>
>> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
>>> wrote:
>>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>>> > lucene@mikemccandless.com> wrote:
>>> >
>>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com
>>> >
>>> >> wrote:
>>> >> > Even running in console mode, the exception is difficult to
>>> interpret.
>>> >> > Here's an exception that I think occurred during an add document,
>>> commit
>>> >> or
>>> >> > close:
>>> >> > doc counts differ for segment _g: field Reader shows 137 but
>>> segmentInfo
>>> >> > shows 5777
>>> >>
>>> >> That's spooky.  Do you have the full exception for this one?  What IO
>>> >> system are you running on?  (Is it just a local drive on your windows
>>> >> computer?) It's almost as if the IO system is not generating an
>>> >> IOException to Java when disk fills up.
>>> >>
>>> >
>>> > Index and code are all on a local drive. There is no other exception
>>> coming
>>> > back - just what I reported.
>>>
>>> But, you didn't report a traceback for this first one?
>>>
>>
>> Yes, I need to add some more printStackTrace calls.
>>
>>
>>>
>>> >> > I ensured that the disk space was low before updating the index.
>>> >>
>>> >> You mean, to intentionally test the disk-full case?
>>> >>
>>> >
>>> > Yes, that's right.
>>>
>>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>>> corruption to happen again, and post back the resulting output?  Make
>>> sure your index first passes CheckIndex before starting (so we don't
>>> begin the test w/ any pre-existing index corruption).
>>>
>>
>> Good point about CheckIndex.  I've already found 2 bad ones. I will build
>> new indexes from scratch. This will take a while.
>>
>>
>>> >> > On another occasion, the exception was:
>>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
>>> >> _4:C126
>>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>>> >>
>>> >> In this case, the SegmentMerger was trying to open this segment, but
>>> >> on attempting to read the first int from the fdx (fields index) file
>>> >> for one of the segments, it hit EOF.
>>> >>
>>> >> This is also spooky -- this looks like index corruption, which should
>>> >> never happen on hitting disk full.
>>> >>
>>> >
>>> > That's what I thought, too. Could Lucene be catching the IOException and
>>> > turning it into a different exception?
>>>
>>> I think that's unlikely, but I guess possible.  We have "disk full"
>>> tests in the unit tests, that throw an IOException at different times.
>>>
>>> What exact windows version are you using?  The local drive is NTFS?
>>>
>>
>> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>>
>>
>>>
>>> Mike
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>
>>>
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Clarification: this CheckIndex is on the index from which the merge/optimize
failed.
Peter

On Tue, Oct 27, 2009 at 10:07 AM, Peter Keegan <pe...@gmail.com>wrote:

> Running CheckIndex after the IOException did produce an error in a term
> frequency:
>
> Opening index @ D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.search.main.3
>
> Segments file=segments_4 numSegments=2 version=FORMAT_DIAGNOSTICS [Lucene
> 2.9]
>   1 of 2: name=_7 docCount=1075533
>     compound=false
>     hasProx=true
>     numFiles=9
>     size (MB)=3,190.933
>     diagnostics = {optimize=true, mergeFactor=7, os.version=5.2, os=Windows 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03, java.vendor=Sun Microsystems Inc.}
>     docStoreOffset=0
>     docStoreSegment=_0
>     docStoreIsCompoundFile=false
>     has deletions [delFileName=_7_1.del]
>     test: open reader.........OK [1145 deleted docs]
>     test: fields..............OK [33 fields]
>     test: field norms.........OK [33 fields]
>     test: terms, freq, prox...ERROR [term literals:cfid129$ docFreq=1 != num docs seen 95 + num docs deleted 0]
> java.lang.RuntimeException: term literals:cfid129$ docFreq=1 != num docs seen 95 + num docs deleted 0
>         at
> org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
>
>         at
> org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
>         at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>     test: stored fields.......OK [3223164 total field count; avg 3 fields per doc]
>     test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
> FAILED
>     WARNING: fixIndex() would remove reference to this segment; full
> exception:
> java.lang.RuntimeException: Term Index test failed
>         at
> org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
>         at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
>
>   2 of 2: name=_8 docCount=2795
>     compound=false
>     hasProx=true
>     numFiles=8
>     size (MB)=10.636
>     diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9 exported - 2009-10-26 07:58:55, source=flush, os.arch=amd64, java.version=1.6.0_03, java.vendor=Sun Microsystems Inc.}
>     no deletions
>     test: open reader.........OK
>     test: fields..............OK [33 fields]
>     test: field norms.........OK [33 fields]
>     test: terms, freq, prox...OK [228791 terms; 1139340 terms/docs pairs; 2209273 tokens]
>     test: stored fields.......OK [8385 total field count; avg 3 fields per doc]
>     test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
>
> WARNING: 1 broken segments (containing 1074388 documents) detected
> WARNING: 1074388 documents will be lost
>
> NOTE: will write new segments file in 5 seconds; this will remove 1074388 docs from the index. THIS IS YOUR LAST CHANCE TO CTRL+C!
>   5...
>   4...
>   3...
>   2...
>   1...
> Writing...
> OK
> Wrote new segments file "segments_5"
>
> Peter
>
>
>
> On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com>wrote:
>
>> After rebuilding the corrupted indexes, the low disk space exception is
>> now occurring as expected. Sorry for the distraction.
>>
>> fyi, here are the details:
>>
>>  java.io.IOException: There is not enough space on the disk
>>     at java.io.RandomAccessFile.writeBytes(Native Method)
>>     at java.io.RandomAccessFile.write(Unknown Source)
>>     at
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>>     at
>> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>>     at
>> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>>     at
>> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>>     at
>> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>>     at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>>     at
>> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>>     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>>     at
>> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>>
>>     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>>     at
>> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>>     at
>> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>>
>>
>> And the corresponding index info log:
>>
>> IFD [Indexer]: setInfoStream
>> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
>> IW 1 [Indexer]: setInfoStream:
>> dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
>> autoCommit=false
>> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0 maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
>> maxFieldLength=2147483647 index=
>> IW 1 [Indexer]: flush at addIndexesNoOptimize
>> IW 1 [Indexer]:   flush: segment=null docStoreSegment=null
>> docStoreOffset=0 flushDocs=false flushDeletes=true flushDocStores=false
>> numDocs=0 numBufDelTerms=0
>> IW 1 [Indexer]:   index before flush
>> IW 1 [Indexer]: now start transaction
>> IW 1 [Indexer]: LMP: findMerges: 2 segments
>> IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
>> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> IW 1 [Indexer]: CMS: now merge
>> IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
>> IW 1 [Indexer]: CMS:   no more merges pending; now return
>> IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
>> pending]
>> IW 1 [Indexer]: now merge
>>   merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>>   merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>>   index=_7:Cx1075533->_0** _8:Cx2795**
>> IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
>> IW 1 [Indexer]: merge: total 1074388 docs
>> IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
>> index=_7:Cx1075533->_0** _8:Cx2795**
>> IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
>> [mergeDocStores]
>> IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit = false]
>> IW 1 [Indexer]: LMP: findMerges: 2 segments
>> IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
>> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
>> IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
>> IW 1 [Indexer]: now merge
>>   merge=_8:Cx2795 into _1 [mergeDocStores]
>>   merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>>   index=_0:C1074388 _8:Cx2795**
>> IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
>> IW 1 [Indexer]: merge: total 2795 docs
>> IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
>> [mergeDocStores] exc=java.io.IOException: There is not enough space on the
>> disk
>> IW 1 [Indexer]: hit exception during merge
>> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file "_1.fdt"
>> IFD [Indexer]: delete "_1.fdt"
>> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file "_1.fdx"
>> IFD [Indexer]: delete "_1.fdx"
>> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
>> file "_1.fnm"
>> IFD [Indexer]: delete "_1.fnm"
>> IW 1 [Indexer]: now rollback transaction
>> IW 1 [Indexer]: all running merges have aborted
>> IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit = false]
>> IFD [Indexer]: delete "_0.nrm"
>> IFD [Indexer]: delete "_0.tis"
>> IFD [Indexer]: delete "_0.fnm"
>> IFD [Indexer]: delete "_0.tii"
>> IFD [Indexer]: delete "_0.frq"
>> IFD [Indexer]: delete "_0.fdx"
>> IFD [Indexer]: delete "_0.prx"
>> IFD [Indexer]: delete "_0.fdt"
>>
>>
>> Peter
>>
>>
>> On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <pe...@gmail.com>wrote:
>>
>>>
>>>
>>> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>>> lucene@mikemccandless.com> wrote:
>>>
>>>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
>>>> wrote:
>>>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>>>> > lucene@mikemccandless.com> wrote:
>>>> >
>>>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <
>>>> peterlkeegan@gmail.com>
>>>> >> wrote:
>>>> >> > Even running in console mode, the exception is difficult to
>>>> interpret.
>>>> >> > Here's an exception that I think occurred during an add document,
>>>> commit
>>>> >> or
>>>> >> > close:
>>>> >> > doc counts differ for segment _g: field Reader shows 137 but
>>>> segmentInfo
>>>> >> > shows 5777
>>>> >>
>>>> >> That's spooky.  Do you have the full exception for this one?  What IO
>>>> >> system are you running on?  (Is it just a local drive on your windows
>>>> >> computer?) It's almost as if the IO system is not generating an
>>>> >> IOException to Java when disk fills up.
>>>> >>
>>>> >
>>>> > Index and code are all on a local drive. There is no other exception
>>>> coming
>>>> > back - just what I reported.
>>>>
>>>> But, you didn't report a traceback for this first one?
>>>>
>>>
>>> Yes, I need to add some more printStackTrace calls.
>>>
>>>
>>>>
>>>> >> > I ensured that the disk space was low before updating the index.
>>>> >>
>>>> >> You mean, to intentionally test the disk-full case?
>>>> >>
>>>> >
>>>> > Yes, that's right.
>>>>
>>>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>>>> corruption to happen again, and post back the resulting output?  Make
>>>> sure your index first passes CheckIndex before starting (so we don't
>>>> begin the test w/ any pre-existing index corruption).
>>>>
>>>
>>> Good point about CheckIndex.  I've already found 2 bad ones. I will build
>>> new indexes from scratch. This will take a while.
>>>
>>>
>>>> >> > On another occasion, the exception was:
>>>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
>>>> >> _4:C126
>>>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>>>> >>
>>>> >> In this case, the SegmentMerger was trying to open this segment, but
>>>> >> on attempting to read the first int from the fdx (fields index) file
>>>> >> for one of the segments, it hit EOF.
>>>> >>
>>>> >> This is also spooky -- this looks like index corruption, which should
>>>> >> never happen on hitting disk full.
>>>> >>
>>>> >
>>>> > That's what I thought, too. Could Lucene be catching the IOException
>>>> and
>>>> > turning it into a different exception?
>>>>
>>>> I think that's unlikely, but I guess possible.  We have "disk full"
>>>> tests in the unit tests, that throw an IOException at different times.
>>>>
>>>> What exact windows version are you using?  The local drive is NTFS?
>>>>
>>>
>>> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>>>
>>>
>>>>
>>>> Mike
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>
>>>>
>>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Running CheckIndex after the IOException did produce an error in a term
frequency:

Opening index @ D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.search.main.3

Segments file=segments_4 numSegments=2 version=FORMAT_DIAGNOSTICS [Lucene
2.9]
  1 of 2: name=_7 docCount=1075533
    compound=false
    hasProx=true
    numFiles=9
    size (MB)=3,190.933
    diagnostics = {optimize=true, mergeFactor=7, os.version=5.2, os=Windows 2003, mergeDocStores=false, lucene.version=2.9 exported - 2009-10-26 07:58:55, source=merge, os.arch=amd64, java.version=1.6.0_03, java.vendor=Sun Microsystems Inc.}
    docStoreOffset=0
    docStoreSegment=_0
    docStoreIsCompoundFile=false
    has deletions [delFileName=_7_1.del]
    test: open reader.........OK [1145 deleted docs]
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...ERROR [term literals:cfid129$ docFreq=1 != num docs seen 95 + num docs deleted 0]
java.lang.RuntimeException: term literals:cfid129$ docFreq=1 != num docs seen 95 + num docs deleted 0
        at
org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)

        at
org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
        at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
    test: stored fields.......OK [3223164 total field count; avg 3 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full
exception:
java.lang.RuntimeException: Term Index test failed
        at
org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
        at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

  2 of 2: name=_8 docCount=2795
    compound=false
    hasProx=true
    numFiles=8
    size (MB)=10.636
    diagnostics = {os.version=5.2, os=Windows 2003, lucene.version=2.9 exported - 2009-10-26 07:58:55, source=flush, os.arch=amd64, java.version=1.6.0_03, java.vendor=Sun Microsystems Inc.}
    no deletions
    test: open reader.........OK
    test: fields..............OK [33 fields]
    test: field norms.........OK [33 fields]
    test: terms, freq, prox...OK [228791 terms; 1139340 terms/docs pairs; 2209273 tokens]
    test: stored fields.......OK [8385 total field count; avg 3 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]

WARNING: 1 broken segments (containing 1074388 documents) detected
WARNING: 1074388 documents will be lost

NOTE: will write new segments file in 5 seconds; this will remove 1074388 docs from the index. THIS IS YOUR LAST CHANCE TO CTRL+C!
  5...
  4...
  3...
  2...
  1...
Writing...
OK
Wrote new segments file "segments_5"
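
For reference, the same check can also be driven from code instead of the
command line. A minimal sketch against the 2.9 API -- the class name and
path handling are illustrative, not from my actual setup:

import java.io.File;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.SimpleFSDirectory;

// Sketch only: open the index directory, run CheckIndex, report the verdict.
public class VerifyIndex {
    public static void main(String[] args) throws Exception {
        Directory dir = new SimpleFSDirectory(new File(args[0]));
        CheckIndex checker = new CheckIndex(dir);
        checker.setInfoStream(System.out);  // prints the per-segment detail shown above
        CheckIndex.Status status = checker.checkIndex();
        System.out.println(status.clean ? "index is OK" : "index is CORRUPT");
        dir.close();
    }
}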

Peter


On Tue, Oct 27, 2009 at 10:00 AM, Peter Keegan <pe...@gmail.com>wrote:

> After rebuilding the corrupted indexes, the low disk space exception is now
> occurring as expected. Sorry for the distraction.
>
> fyi, here are the details:
>
>  java.io.IOException: There is not enough space on the disk
>     at java.io.RandomAccessFile.writeBytes(Native Method)
>     at java.io.RandomAccessFile.write(Unknown Source)
>     at
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
>     at
> org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
>     at
> org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
>     at
> org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
>     at
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
>     at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
>     at
> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
>     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>     at
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
>
>     at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
>     at
> org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
>     at
> org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)
>
>
> And the corresponding index info log:
>
> IFD [Indexer]: setInfoStream
> deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
> IW 1 [Indexer]: setInfoStream:
> dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
> autoCommit=false
> mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0 maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
> maxFieldLength=2147483647 index=
> IW 1 [Indexer]: flush at addIndexesNoOptimize
> IW 1 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
> flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
> numBufDelTerms=0
> IW 1 [Indexer]:   index before flush
> IW 1 [Indexer]: now start transaction
> IW 1 [Indexer]: LMP: findMerges: 2 segments
> IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> IW 1 [Indexer]: CMS: now merge
> IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: CMS:   no more merges pending; now return
> IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
> pending]
> IW 1 [Indexer]: now merge
>   merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
>   merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
>   index=_7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
> IW 1 [Indexer]: merge: total 1074388 docs
> IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
> index=_7:Cx1075533->_0** _8:Cx2795**
> IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0
> [mergeDocStores]
> IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit = false]
> IW 1 [Indexer]: LMP: findMerges: 2 segments
> IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
> IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
> IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
> IW 1 [Indexer]: now merge
>   merge=_8:Cx2795 into _1 [mergeDocStores]
>   merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
>   index=_0:C1074388 _8:Cx2795**
> IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
> IW 1 [Indexer]: merge: total 2795 docs
> IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
> [mergeDocStores] exc=java.io.IOException: There is not enough space on the
> disk
> IW 1 [Indexer]: hit exception during merge
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file "_1.fdt"
> IFD [Indexer]: delete "_1.fdt"
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file "_1.fdx"
> IFD [Indexer]: delete "_1.fdx"
> IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced
> file "_1.fnm"
> IFD [Indexer]: delete "_1.fnm"
> IW 1 [Indexer]: now rollback transaction
> IW 1 [Indexer]: all running merges have aborted
> IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit = false]
> IFD [Indexer]: delete "_0.nrm"
> IFD [Indexer]: delete "_0.tis"
> IFD [Indexer]: delete "_0.fnm"
> IFD [Indexer]: delete "_0.tii"
> IFD [Indexer]: delete "_0.frq"
> IFD [Indexer]: delete "_0.fdx"
> IFD [Indexer]: delete "_0.prx"
> IFD [Indexer]: delete "_0.fdt"
>
>
> Peter
>
>
> On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <pe...@gmail.com>wrote:
>
>>
>>
>> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
>> lucene@mikemccandless.com> wrote:
>>
>>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
>>> wrote:
>>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>>> > lucene@mikemccandless.com> wrote:
>>> >
>>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <
>>> peterlkeegan@gmail.com>
>>> >> wrote:
>>> >> > Even running in console mode, the exception is difficult to
>>> interpret.
>>> >> > Here's an exception that I think occurred during an add document,
>>> commit
>>> >> or
>>> >> > close:
>>> >> > doc counts differ for segment _g: field Reader shows 137 but
>>> segmentInfo
>>> >> > shows 5777
>>> >>
>>> >> That's spooky.  Do you have the full exception for this one?  What IO
>>> >> system are you running on?  (Is it just a local drive on your windows
>>> >> computer?) It's almost as if the IO system is not generating an
>>> >> IOException to Java when disk fills up.
>>> >>
>>> >
>>> > Index and code are all on a local drive. There is no other exception
>>> coming
>>> > back - just what I reported.
>>>
>>> But, you didn't report a traceback for this first one?
>>>
>>
>> Yes, I need to add some more printStackTrace calls.
>>
>>
>>>
>>> >> > I ensured that the disk space was low before updating the index.
>>> >>
>>> >> You mean, to intentionally test the disk-full case?
>>> >>
>>> >
>>> > Yes, that's right.
>>>
>>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>>> corruption to happen again, and post back the resulting output?  Make
>>> sure your index first passes CheckIndex before starting (so we don't
>>> begin the test w/ any pre-existing index corruption).
>>>
>>
>> Good point about CheckIndex.  I've already found 2 bad ones. I will build
>> new indexes from scratch. This will take a while.
>>
>>
>>> >> > On another occasion, the exception was:
>>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
>>> >> _4:C126
>>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>>> >>
>>> >> In this case, the SegmentMerger was trying to open this segment, but
>>> >> on attempting to read the first int from the fdx (fields index) file
>>> >> for one of the segments, it hit EOF.
>>> >>
>>> >> This is also spooky -- this looks like index corruption, which should
>>> >> never happen on hitting disk full.
>>> >>
>>> >
>>> > That's what I thought, too. Could Lucene be catching the IOException
>>> and
>>> > turning it into a different exception?
>>>
>>> I think that's unlikely, but I guess possible.  We have "disk full"
>>> tests in the unit tests, that throw an IOException at different times.
>>>
>>> What exact windows version are you using?  The local drive is NTFS?
>>>
>>
>> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>>
>>
>>>
>>> Mike
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>
>>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
After rebuilding the corrupted indexes, the low disk space exception is now
occurring as expected. Sorry for the distraction.

fyi, here are the details:

 java.io.IOException: There is not enough space on the disk
    at java.io.RandomAccessFile.writeBytes(Native Method)
    at java.io.RandomAccessFile.write(Unknown Source)
    at
org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192)
    at
org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
    at
org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
    at
org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109)
    at
org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199)
    at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144)
    at
org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
    at
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5011)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
    at
org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
    at
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)


And the corresponding index info log:

IFD [Indexer]: setInfoStream
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@256ef705
IW 1 [Indexer]: setInfoStream:
dir=org.apache.lucene.store.SimpleFSDirectory@D:\mnsavs\lresumes3\lresumes3.luc\lresumes3.update.main.4
autoCommit=false
mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@181b7c76 mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler@34883357 ramBufferSizeMB=16.0
maxBufferedDocs=-1 maxBuffereDeleteTerms=-1
maxFieldLength=2147483647 index=
IW 1 [Indexer]: flush at addIndexesNoOptimize
IW 1 [Indexer]:   flush: segment=null docStoreSegment=null docStoreOffset=0
flushDocs=false flushDeletes=true flushDocStores=false numDocs=0
numBufDelTerms=0
IW 1 [Indexer]:   index before flush
IW 1 [Indexer]: now start transaction
IW 1 [Indexer]: LMP: findMerges: 2 segments
IW 1 [Indexer]: LMP:   level 8.774518 to 9.524518: 1 segments
IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
IW 1 [Indexer]: CMS: now merge
IW 1 [Indexer]: CMS:   index: _7:Cx1075533->_0** _8:Cx2795**
IW 1 [Indexer]: CMS:   no more merges pending; now return
IW 1 [Indexer]: add merge to pendingMerges: _7:Cx1075533->_0 [total 1
pending]
IW 1 [Indexer]: now merge
  merge=_7:Cx1075533->_0 into _0 [mergeDocStores]
  merge=org.apache.lucene.index.MergePolicy$OneMerge@4d480ea
  index=_7:Cx1075533->_0** _8:Cx2795**
IW 1 [Indexer]: merging _7:Cx1075533->_0 into _0 [mergeDocStores]
IW 1 [Indexer]: merge: total 1074388 docs
IW 1 [Indexer]: commitMerge: _7:Cx1075533->_0 into _0 [mergeDocStores]
index=_7:Cx1075533->_0** _8:Cx2795**
IW 1 [Indexer]: commitMergeDeletes _7:Cx1075533->_0 into _0 [mergeDocStores]
IFD [Indexer]: now checkpoint "segments_1" [2 segments ; isCommit = false]
IW 1 [Indexer]: LMP: findMerges: 2 segments
IW 1 [Indexer]: LMP:   level 8.864886 to 9.614886: 1 segments
IW 1 [Indexer]: LMP:   level 6.2973914 to 7.0473914: 1 segments
IW 1 [Indexer]: add merge to pendingMerges: _8:Cx2795 [total 1 pending]
IW 1 [Indexer]: now merge
  merge=_8:Cx2795 into _1 [mergeDocStores]
  merge=org.apache.lucene.index.MergePolicy$OneMerge@606f8b2b
  index=_0:C1074388 _8:Cx2795**
IW 1 [Indexer]: merging _8:Cx2795 into _1 [mergeDocStores]
IW 1 [Indexer]: merge: total 2795 docs
IW 1 [Indexer]: handleMergeException: merge=_8:Cx2795 into _1
[mergeDocStores] exc=java.io.IOException: There is not enough space on the
disk
IW 1 [Indexer]: hit exception during merge
IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
"_1.fdt"
IFD [Indexer]: delete "_1.fdt"
IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
"_1.fdx"
IFD [Indexer]: delete "_1.fdx"
IFD [Indexer]: refresh [prefix=_1]: removing newly created unreferenced file
"_1.fnm"
IFD [Indexer]: delete "_1.fnm"
IW 1 [Indexer]: now rollback transaction
IW 1 [Indexer]: all running merges have aborted
IFD [Indexer]: now checkpoint "segments_1" [0 segments ; isCommit = false]
IFD [Indexer]: delete "_0.nrm"
IFD [Indexer]: delete "_0.tis"
IFD [Indexer]: delete "_0.fnm"
IFD [Indexer]: delete "_0.tii"
IFD [Indexer]: delete "_0.frq"
IFD [Indexer]: delete "_0.fdx"
IFD [Indexer]: delete "_0.prx"
IFD [Indexer]: delete "_0.fdt"


Peter

On Mon, Oct 26, 2009 at 3:59 PM, Peter Keegan <pe...@gmail.com>wrote:

>
>
> On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
>> > lucene@mikemccandless.com> wrote:
>> >
>> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <peterlkeegan@gmail.com
>> >
>> >> wrote:
>> >> > Even running in console mode, the exception is difficult to
>> interpret.
>> >> > Here's an exception that I think occurred during an add document,
>> commit
>> >> or
>> >> > close:
>> >> > doc counts differ for segment _g: field Reader shows 137 but
>> segmentInfo
>> >> > shows 5777
>> >>
>> >> That's spooky.  Do you have the full exception for this one?  What IO
>> >> system are you running on?  (Is it just a local drive on your windows
>> >> computer?) It's almost as if the IO system is not generating an
>> >> IOException to Java when disk fills up.
>> >>
>> >
>> > Index and code are all on a local drive. There is no other exception
>> coming
>> > back - just what I reported.
>>
>> But, you didn't report a traceback for this first one?
>>
>
> Yes, I need to add some more printStackTrace calls.
>
>
>>
>> >> > I ensured that the disk space was low before updating the index.
>> >>
>> >> You mean, to intentionally test the disk-full case?
>> >>
>> >
>> > Yes, that's right.
>>
>> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
>> corruption to happen again, and post back the resulting output?  Make
>> sure your index first passes CheckIndex before starting (so we don't
>> begin the test w/ any pre-existing index corruption).
>>
>
> Good point about CheckIndex.  I've already found 2 bad ones. I will build
> new indexes from scratch. This will take a while.
>
>
>> >> > On another occasion, the exception was:
>> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
>> >> _4:C126
>> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>> >>
>> >> In this case, the SegmentMerger was trying to open this segment, but
>> >> on attempting to read the first int from the fdx (fields index) file
>> >> for one of the segments, it hit EOF.
>> >>
>> >> This is also spooky -- this looks like index corruption, which should
>> >> never happen on hitting disk full.
>> >>
>> >
>> > That's what I thought, too. Could Lucene be catching the IOException and
>> > turning it into a different exception?
>>
>> I think that's unlikely, but I guess possible.  We have "disk full"
>> tests in the unit tests, that throw an IOException at different times.
>>
>> What exact windows version are you using?  The local drive is NTFS?
>>
>
> Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS
>
>
>>
>> Mike
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
On Mon, Oct 26, 2009 at 3:00 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> > On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
> > lucene@mikemccandless.com> wrote:
> >
> >> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com>
> >> wrote:
> >> > Even running in console mode, the exception is difficult to interpret.
> >> > Here's an exception that I think occurred during an add document,
> commit
> >> or
> >> > close:
> >> > doc counts differ for segment _g: field Reader shows 137 but
> segmentInfo
> >> > shows 5777
> >>
> >> That's spooky.  Do you have the full exception for this one?  What IO
> >> system are you running on?  (Is it just a local drive on your windows
> >> computer?) It's almost as if the IO system is not generating an
> >> IOException to Java when disk fills up.
> >>
> >
> > Index and code are all on a local drive. There is no other exception
> coming
> > back - just what I reported.
>
> But, you didn't report a traceback for this first one?
>

Yes, I need to add some more printStackTrace calls.


>
> >> > I ensured that the disk space was low before updating the index.
> >>
> >> You mean, to intentionally test the disk-full case?
> >>
> >
> > Yes, that's right.
>
> OK.  Can you turn on IndexWriter's infoStream, get this disk full /
> corruption to happen again, and post back the resulting output?  Make
> sure your index first passes CheckIndex before starting (so we don't
> begin the test w/ any pre-existing index corruption).
>

Good point about CheckIndex.  I've already found 2 bad ones. I will build
new indexes from scratch. This will take a while.


> >> > On another occasion, the exception was:
> >> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
> >> _4:C126
> >> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
> >>
> >> In this case, the SegmentMerger was trying to open this segment, but
> >> on attempting to read the first int from the fdx (fields index) file
> >> for one of the segments, it hit EOF.
> >>
> >> This is also spooky -- this looks like index corruption, which should
> >> never happen on hitting disk full.
> >>
> >
> > That's what I thought, too. Could Lucene be catching the IOException and
> > turning it into a different exception?
>
> I think that's unlikely, but I guess possible.  We have "disk full"
> tests in the unit tests, that throw an IOException at different times.
>
> What exact windows version are you using?  The local drive is NTFS?
>

Windows Server 2003 Enterprise x64 SP2. Local drive is NTFS


>
> Mike
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 26, 2009 at 2:55 PM, Peter Keegan <pe...@gmail.com> wrote:
> On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > Even running in console mode, the exception is difficult to interpret.
>> > Here's an exception that I think occurred during an add document, commit
>> or
>> > close:
>> > doc counts differ for segment _g: field Reader shows 137 but segmentInfo
>> > shows 5777
>>
>> That's spooky.  Do you have the full exception for this one?  What IO
>> system are you running on?  (Is it just a local drive on your windows
>> computer?) It's almost as if the IO system is not generating an
>> IOException to Java when disk fills up.
>>
>
> Index and code are all on a local drive. There is no other exception coming
> back - just what I reported.

But, you didn't report a traceback for this first one?

>> > I ensured that the disk space was low before updating the index.
>>
>> You mean, to intentionally test the disk-full case?
>>
>
> Yes, that's right.

OK.  Can you turn on IndexWriter's infoStream, get this disk full /
corruption to happen again, and post back the resulting output?  Make
sure your index first passes CheckIndex before starting (so we don't
begin the test w/ any pre-existing index corruption).
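
A minimal fragment of wiring that up, in case it helps (dir and analyzer
stand in for your existing setup; setInfoStream is the important line):

IndexWriter writer = new IndexWriter(dir, analyzer,
    IndexWriter.MaxFieldLength.UNLIMITED);  // dir/analyzer: whatever you use today
writer.setInfoStream(System.out);           // IW/IFD/CMS events go to this stream
// ... index as usual; the log ends up wherever System.out is directed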

>> > On another occasion, the exception was:
>> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
>> _4:C126
>> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>>
>> In this case, the SegmentMerger was trying to open this segment, but
>> on attempting to read the first int from the fdx (fields index) file
>> for one of the segments, it hit EOF.
>>
>> This is also spooky -- this looks like index corruption, which should
>> never happen on hitting disk full.
>>
>
> That's what I thought, too. Could Lucene be catching the IOException and
> turning it into a different exception?

I think that's unlikely, but I guess possible.  We have "disk full"
tests in the unit tests, that throw an IOException at different times.
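
If you want to provoke that path deterministically outside the unit tests,
a wrapper directory that fails writes after a byte budget works too. Rough
sketch only -- the class name and message are made up, this is not our
actual test harness:

import java.io.IOException;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.RAMDirectory;

// Sketch: simulates "disk full" by failing writes once a byte budget is spent.
class BudgetedDirectory extends RAMDirectory {
    private long remaining;
    BudgetedDirectory(long budgetBytes) { remaining = budgetBytes; }

    @Override
    public IndexOutput createOutput(String name) throws IOException {
        final IndexOutput out = super.createOutput(name);
        return new IndexOutput() {
            public void writeByte(byte b) throws IOException { charge(1); out.writeByte(b); }
            public void writeBytes(byte[] b, int off, int len) throws IOException { charge(len); out.writeBytes(b, off, len); }
            public void flush() throws IOException { out.flush(); }
            public void close() throws IOException { out.close(); }
            public long getFilePointer() { return out.getFilePointer(); }
            public void seek(long pos) throws IOException { out.seek(pos); }
            public long length() throws IOException { return out.length(); }
        };
    }

    synchronized void charge(long n) throws IOException {
        remaining -= n;
        if (remaining < 0)
            throw new IOException("simulated: there is not enough space on the disk");
    }
}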

What exact windows version are you using?  The local drive is NTFS?

Mike

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
On Mon, Oct 26, 2009 at 2:50 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com>
> wrote:
> > Even running in console mode, the exception is difficult to interpret.
> > Here's an exception that I think occurred during an add document, commit
> or
> > close:
> > doc counts differ for segment _g: field Reader shows 137 but segmentInfo
> > shows 5777
>
> That's spooky.  Do you have the full exception for this one?  What IO
> system are you running on?  (Is it just a local drive on your windows
> computer?) It's almost as if the IO system is not generating an
> IOException to Java when disk fills up.
>

Index and code are all on a local drive. There is no other exception coming
back - just what I reported.

>
> > I ensured that the disk space was low before updating the index.
>
> You mean, to intentionally test the disk-full case?
>

Yes, that's right.


>
> > On another occasion, the exception was:
> > background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107
> _4:C126
> > _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
>
> In this case, the SegmentMerger was trying to open this segment, but
> on attempting to read the first int from the fdx (fields index) file
> for one of the segments, it hit EOF.
>
> This is also spooky -- this looks like index corruption, which should
> never happen on hitting disk full.
>

That's what I thought, too. Could Lucene be catching the IOException and
turning it into a different exception?


>
> Mike
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 26, 2009 at 10:44 AM, Peter Keegan <pe...@gmail.com> wrote:
> Even running in console mode, the exception is difficult to interpret.
> Here's an exception that I think occurred during an add document, commit or
> close:
> doc counts differ for segment _g: field Reader shows 137 but segmentInfo
> shows 5777

That's spooky.  Do you have the full exception for this one?  What IO
system are you running on?  (Is it just a local drive on your windows
computer?) It's almost as if the IO system is not generating an
IOException to Java when disk fills up.

> I ensured that the disk space was low before updating the index.

You mean, to intentionally test the disk-full case?

> On another occasion, the exception was:
> background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107 _4:C126
> _5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]

In this case, the SegmentMerger was trying to open this segment, but
on attempting to read the first int from the fdx (fields index) file
for one of the segments, it hit EOF.

This is also spooky -- this looks like index corruption, which should
never happen on hitting disk full.

Mike

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
Even running in console mode, the exception is difficult to interpret.
Here's an exception that I think occurred during an add document, commit or
close:
doc counts differ for segment _g: field Reader shows 137 but segmentInfo
shows 5777
I ensured that the disk space was low before updating the index.

On another occasion, the exception was:
background merge hit exception: _0:C1080260 _1:C139 _2:C123 _3:C107 _4:C126
_5:C121 _6:C126 _7:C711 _8:C116 into _9 [optimize] [mergeDocStores]
And the accompanying stack trace was:
java.io.IOException: read past EOF
        at
org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
        at
org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
        at org.apache.lucene.store.IndexInput.readInt(IndexInput.java:70)
        at
org.apache.lucene.index.FieldsReader.<init>(FieldsReader.java:110)
        at
org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:277)
        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:640)
        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:608)
        at
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:679)
        at
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4961)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4596)
        at
org.apache.lucene.index.IndexWriter.resolveExternalSegments(IndexWriter.java:3786)
        at
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(IndexWriter.java:3695)

I guess this is just the nature of a low disk space condition on Windows. I
expected to see a 'no space left on device' IO exception.
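
For what it's worth, the low-disk condition can be confirmed independently
by asking the JVM directly (Java 6 API; the drive letter is illustrative):

import java.io.File;

public class FreeSpace {
    public static void main(String[] args) {
        File drive = new File("D:\\");
        // getUsableSpace() is available since Java 6
        System.out.println("usable bytes on " + drive + ": " + drive.getUsableSpace());
    }
}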

Peter

On Sun, Oct 25, 2009 at 8:54 PM, Peter Keegan <pe...@gmail.com>wrote:

> The environment involves a lot of I/O from merge/optimize operations on
> multiple indexes (shards) on one server.
> I will try running the indexers in console mode, where I would expect to
> see all errors and exceptions.
>
> Peter
>
>
>
> On Sun, Oct 25, 2009 at 8:40 PM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> Hmm, if you got no exception whatsoever, something more fundamental
>> seems to be wrong w/ the error reporting when running as a windows
>> service.  Maybe make a simple Java test program that throws an
>> exception and try to get that working?
>>
>> Mike
>>
>> On Sun, Oct 25, 2009 at 7:35 PM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> >>Did you get any traceback printed at all?
>> > no, only what I reported.
>> >
>> >>Did you see any BG thread exceptions on wherever your System.err is
>> > directed to?
>> > The jvm was running as a windows service, so output to System.err may
>> have
>> > gone to the bit bucket.
>> > That's an interesting point, though.
>> >
>> > Peter
>> >
>> >
>> > On Sun, Oct 25, 2009 at 8:47 AM, Michael McCandless <
>> > lucene@mikemccandless.com> wrote:
>> >
>> >> Hmm... Lucene tries to catch the original cause (from the BG thread
>> >> doing the merge) and forward it to the main thread waiting for
>> >> optimize to complete.
>> >>
>> >> Did you get any traceback printed at all?  It should include one
>> >> traceback into Lucene's optimize method, and then another (under
>> >> "caused by") showing the exception from the BG merge thread.
>> >>
>> >> Did you see any BG thread exceptions on wherever your System.err is
>> >> directed to?
>> >>
>> >> Mike
>> >>
>> >> On Sat, Oct 24, 2009 at 5:21 PM, Peter Keegan <pe...@gmail.com>
>> >> wrote:
>> >> > btw, this is with Lucene 2.9
>> >> >
>> >> > On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <
>> peterlkeegan@gmail.com
>> >> >wrote:
>> >> >
>> >> >> I'm sometimes seeing the following exception from an operation that
>> does
>> >> a
>> >> >> merge and optimize:
>> >> >>  java.io.IOException: background merge hit exception: _0:C1082866
>> _1:C79
>> >> >> into _2 [optimize] [mergeDocStores]
>> >> >> I'm pretty sure that it's caused by a temporary low disk space
>> >> condition,
>> >> >> but I'd like to be able to confirm this. It would be nice to have
>> the
>> >> java
>> >> >> exception included in the Lucene exception. Any way to get this?
>> >> >>
>> >> >> Peter
>> >> >>
>> >> >>
>> >> >
>> >>
>> >> ---------------------------------------------------------------------
>> >> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> >> For additional commands, e-mail: java-user-help@lucene.apache.org
>> >>
>> >>
>> >
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>
>>
>

Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
The environment involves a lot of I/O from merge/optimize operations on
multiple indexes (shards) on one server.
I will try running the indexers in console mode, where I would expect to see
all errors and exceptions.

Peter


On Sun, Oct 25, 2009 at 8:40 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> Hmm, if you got no exception whatsoever, something more fundamental
> seems to be wrong w/ the error reporting when running as a windows
> service.  Maybe make a simple Java test program that throws an
> exception and try to get that working?
>
> Mike
>
> On Sun, Oct 25, 2009 at 7:35 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> >>Did you get any traceback printed at all?
> > no, only what I reported.
> >
> >>Did you see any BG thread exceptions on wherever your System.err is
> > directed to?
> > The jvm was running as a windows service, so output to System.err may
> have
> > gone to the bit bucket.
> > That's an interesting point, though.
> >
> > Peter
> >
> >
> > On Sun, Oct 25, 2009 at 8:47 AM, Michael McCandless <
> > lucene@mikemccandless.com> wrote:
> >
> >> Hmm... Lucene tries to catch the original cause (from the BG thread
> >> doing the merge) and forward it to the main thread waiting for
> >> optimize to complete.
> >>
> >> Did you get any traceback printed at all?  It should include one
> >> traceback into Lucene's optimize method, and then another (under
> >> "caused by") showing the exception from the BG merge thread.
> >>
> >> Did you see any BG thread exceptions on wherever your System.err is
> >> directed to?
> >>
> >> Mike
> >>
> >> On Sat, Oct 24, 2009 at 5:21 PM, Peter Keegan <pe...@gmail.com>
> >> wrote:
> >> > btw, this is with Lucene 2.9
> >> >
> >> > On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <peterlkeegan@gmail.com
> >> >wrote:
> >> >
> >> >> I'm sometimes seeing the following exception from an operation that
> does
> >> a
> >> >> merge and optimize:
> >> >>  java.io.IOException: background merge hit exception: _0:C1082866
> _1:C79
> >> >> into _2 [optimize] [mergeDocStores]
> >> >> I'm pretty sure that it's caused by a temporary low disk space
> >> condition,
> >> >> but I'd like to be able to confirm this. It would be nice to have the
> >> java
> >> >> exception included in the Lucene exception. Any way to get this?
> >> >>
> >> >> Peter
> >> >>
> >> >>
> >> >
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> >> For additional commands, e-mail: java-user-help@lucene.apache.org
> >>
> >>
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
Hmm, if you got no exception whatsoever, something more fundamental
seems to be wrong w/ the error reporting when running as a windows
service.  Maybe make a simple Java test program that throws an
exception and try to get that working?
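
Something as small as this would do (hypothetical sketch) -- if its stack
trace never shows up in the service's logs, the wrapper is eating stderr:

public class ThrowTest {
    public static void main(String[] args) {
        throw new RuntimeException("if you can read this, stderr is reaching your logs");
    }
}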

Mike

On Sun, Oct 25, 2009 at 7:35 PM, Peter Keegan <pe...@gmail.com> wrote:
>>Did you get any traceback printed at all?
> no, only what I reported.
>
>>Did you see any BG thread exceptions on wherever your System.err is
> directed to?
> The jvm was running as a windows service, so output to System.err may have
> gone to the bit bucket.
> That's an interesting point, though.
>
> Peter
>
>
> On Sun, Oct 25, 2009 at 8:47 AM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
>
>> Hmm... Lucene tries to catch the original cause (from the BG thread
>> doing the merge) and forward it to the main thread waiting for
>> optimize to complete.
>>
>> Did you get any traceback printed at all?  It should include one
>> traceback into Lucene's optimize method, and then another (under
>> "caused by") showing the exception from the BG merge thread.
>>
>> Did you see any BG thread exceptions on wherever your System.err is
>> directed to?
>>
>> Mike
>>
>> On Sat, Oct 24, 2009 at 5:21 PM, Peter Keegan <pe...@gmail.com>
>> wrote:
>> > btw, this is with Lucene 2.9
>> >
>> > On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <peterlkeegan@gmail.com
>> >wrote:
>> >
>> >> I'm sometimes seeing the following exception from an operation that
>> >> does a merge and optimize:
>> >>  java.io.IOException: background merge hit exception: _0:C1082866
>> >> _1:C79 into _2 [optimize] [mergeDocStores]
>> >> I'm pretty sure that it's caused by a temporary low disk space
>> >> condition, but I'd like to be able to confirm this. It would be nice
>> >> to have the Java exception included in the Lucene exception. Any way
>> >> to get this?
>> >>
>> >> Peter
>> >>
>> >>
>> >
>>


Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
>Did you get any stack trace printed at all?
No, only what I reported.

>Did you see any BG thread exceptions wherever your System.err is
>directed?
The JVM was running as a Windows service, so output to System.err may have
gone to the bit bucket.
That's an interesting point, though.
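
One thing I could try is redirecting System.err to a file when the
service starts, before any indexing begins - a rough sketch (the log
path is just an example):

import java.io.FileOutputStream;
import java.io.PrintStream;

public class RedirectStderr {
    public static void main(String[] args) throws Exception {
        // Append stderr to a file so exceptions from background merge
        // threads aren't lost when the JVM runs as a Windows service.
        System.setErr(new PrintStream(
            new FileOutputStream("C:\\logs\\indexer-stderr.log", true), true));
        // ... start the indexing service as usual ...
    }
}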

Peter


On Sun, Oct 25, 2009 at 8:47 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> Hmm... Lucene tries to catch the original cause (from the BG thread
> doing the merge) and forward it to the main thread waiting for
> optimize to complete.
>
> Did you get any stack trace printed at all?  It should include one
> trace into Lucene's optimize method, and then another (under
> "Caused by") showing the exception from the BG merge thread.
>
> Did you see any BG thread exceptions wherever your System.err is
> directed?
>
> Mike
>
> On Sat, Oct 24, 2009 at 5:21 PM, Peter Keegan <pe...@gmail.com>
> wrote:
> > btw, this is with Lucene 2.9
> >
> > On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <peterlkeegan@gmail.com>wrote:
> >
> >> I'm sometimes seeing the following exception from an operation that
> >> does a merge and optimize:
> >>  java.io.IOException: background merge hit exception: _0:C1082866
> >> _1:C79 into _2 [optimize] [mergeDocStores]
> >> I'm pretty sure that it's caused by a temporary low disk space
> >> condition, but I'd like to be able to confirm this. It would be nice
> >> to have the Java exception included in the Lucene exception. Any way
> >> to get this?
> >>
> >> Peter
> >>
> >>
> >
>

Re: IO exception during merge/optimize

Posted by Michael McCandless <lu...@mikemccandless.com>.
Hmm... Lucene tries to catch the original cause (from the BG thread
doing the merge) and forward it to the main thread waiting for
optimize to complete.

Did you get any stack trace printed at all?  It should include one
trace into Lucene's optimize method, and then another (under
"Caused by") showing the exception from the BG merge thread.

Did you see any BG thread exceptions wherever your System.err is directed?
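
In other words, catching the IOException from optimize() and walking
getCause() should reach the root cause - a rough sketch (the helper
class name is hypothetical):

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

public class OptimizeHelper {
    // Print the whole cause chain explicitly, in case the logging
    // setup drops the "Caused by" frames from the merge thread.
    public static void optimizeAndReport(IndexWriter writer) throws IOException {
        try {
            writer.optimize();
        } catch (IOException e) {
            for (Throwable t = e; t != null; t = t.getCause()) {
                System.err.println(t);
            }
            throw e;
        }
    }
}

Turning on the IndexWriter infoStream (writer.setInfoStream(...)) would
also record what the merge was doing when it hit the exception.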

Mike

On Sat, Oct 24, 2009 at 5:21 PM, Peter Keegan <pe...@gmail.com> wrote:
> btw, this is with Lucene 2.9
>
> On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <pe...@gmail.com>wrote:
>
>> I'm sometimes seeing the following exception from an operation that does a
>> merge and optimize:
>>  java.io.IOException: background merge hit exception: _0:C1082866 _1:C79
>> into _2 [optimize] [mergeDocStores]
>> I'm pretty sure that it's caused by a temporary low disk space condition,
>> but I'd like to be able to confirm this. It would be nice to have the Java
>> exception included in the Lucene exception. Any way to get this?
>>
>> Peter
>>
>>
>



Re: IO exception during merge/optimize

Posted by Peter Keegan <pe...@gmail.com>.
btw, this is with Lucene 2.9

On Sat, Oct 24, 2009 at 5:20 PM, Peter Keegan <pe...@gmail.com>wrote:

> I'm sometimes seeing the following exception from an operation that does a
> merge and optimize:
>  java.io.IOException: background merge hit exception: _0:C1082866 _1:C79
> into _2 [optimize] [mergeDocStores]
> I'm pretty sure that it's caused by a temporary low disk space condition,
> but I'd like to be able to confirm this. It would be nice to have the Java
> exception included in the Lucene exception. Any way to get this?
>
> Peter
>
>
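
To check the low-disk-space theory, one option (assuming Java 6; the
index path is just an example) would be to log the free space on the
index volume around the optimize call:

import java.io.File;

public class DiskSpaceCheck {
    public static void main(String[] args) {
        // getUsableSpace() reports the bytes available to this JVM on
        // the volume holding the index; an optimize can transiently
        // need extra space on the order of the index size.
        File indexDir = new File("D:\\index");
        System.out.println("usable bytes: " + indexDir.getUsableSpace());
    }
}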