Posted to dev@lucene.apache.org by eks dev <ek...@yahoo.co.uk> on 2007/07/13 10:33:39 UTC

Re: Post mortem kudos for (LUCENE-843) :)

Hi Mike, 
> Was 24M (and not more) clearly the fastest performance?

No, this is roughly the optimum. Throwing more memory at it makes things slightly faster, at a slow rate, up to a maximum at 32M.  After that things slowly start getting slower.

We are not yet completely done with tuning, especially with the two tips you mentioned in this mail.
Fields are already reused, but

1. Reusing Document: there is one new Vector() in there (and at these speeds, something like this makes a difference!!!)
In Document: List fields = new Vector(); (by the way, must this be a synchronized Vector? Why not an ArrayList? Is there any difference?)

2. Reusing Field: excuse my ignorance, but how can I do it? With Document it is easy with
luceneDocument.add(field)
luceneDocument.removeFields(name) // Wouldn't it be better to have luceneDocument.removeAllFields()?



3. "LUCENE-845" Whoops, I totally overlooked this one! And I am sure my maxBufferedDocs is well under what fits in 24Mb?!?  Any good tip on how to determine good number: count added docs and see how far this number goes before flush() triggers (how I detect when flush by ram gets triggered?) and than add 10% to this number...

----- Original Message ----
From: Michael McCandless <lu...@mikemccandless.com>
To: java-dev@lucene.apache.org
Sent: Thursday, 12 July, 2007 9:22:48 PM
Subject: Re: Post mortem kudos for (LUCENE-843) :)


Thank you for the compliments, and thank you for being such early
adopters and testers!  I'm very glad you didn't hit any issues :)

> before LUCENE-843 indexing speed was 5-6k records per second (and I
> believed this was already as fast as it gets)
> after (trunk version yesterday) 60-65k documents per second! All
> (exhaustive!) tests pass on this index.

Wow, 10X speedup is even faster than my fastest results!

> autocommit = false, 24M RAMBuffer, using char[] instead of String
> for Token (this was the reason we separated Analysis in two phases,
> leaving for Lucene Analyzer only simple whitespace tokenization)

Looks like you're doing everything right to get the fastest performance.
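
(For anyone following along, that setup boils down to roughly the sketch
below.  The class name, directory path and analyzer are placeholders, so
treat it as an illustration rather than copy/paste code:)

    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class FastBulkIndexer {
      public static void main(String[] args) throws Exception {
        // autoCommit=false: changes are not visible/committed until close().
        FSDirectory dir = FSDirectory.getDirectory("/path/to/index");  // placeholder path
        IndexWriter writer = new IndexWriter(dir, false, new WhitespaceAnalyzer());
        // Flush by RAM usage (24 MB) instead of by a buffered-doc count.
        writer.setRAMBufferSizeMB(24.0);

        // ... writer.addDocument(...) in a loop ...

        writer.close();  // single flush/commit at the end
      }
    }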

You can also re-use the Document & Field instances, as well as the Token
instance in your analyzer; that should help more.

Was 24M (and not more) clearly the fastest performance?

Also note that you must work around LUCENE-845 (still open):

  http://issues.apache.org/jira/browse/LUCENE-845

You should set your maxBufferedDocs to something "close to but always
above" how many docs actually get flushed after 24 MB RAM is full,
else you could spend way too much time merging.  I'm working on
LUCENE-845 now but not yet sure when it will be resolved...

Mike




Re: Post mortem kudos for (LUCENE-843) :)

Posted by Michael McCandless <lu...@mikemccandless.com>.
"Peter Keegan" <pe...@gmail.com> wrote:
> I did some performance comparison testing of Lucene 2.0 vs. trunk (with
> LUCENE-843). I'm seeing at least a 4X increase in indexing rate with the new
> DocumentsWriter in LUCENE-843 (still doing single-threaded indexing). Better
> yet, the total time to build the index is much shorter because I can now
> build the entire 3GB index (900K docs) in one segment in RAM (using
> FSDirectory) and flush it to disk at the end. Before, I had to build smaller
> segments (20K docs), merge after 20 segments and then optimize at the end.

Awesome :)

> The memory usage with LUCENE-843 is much lower, presumably because stored
> fields and term vectors no longer sit in RAM.

Right, not buffering the stored fields & term vectors in RAM is a big
win.  In addition, storing the postings in RAM as a single shared
hash table backed by a pool of large byte[] arrays, instead of separate
1 KB buffers for the files of each single-document segment, also
improves RAM efficiency.

In my tests using Europarl content with small docs (~100 terms = ~550
bytes per doc), with stored fields & term vectors enabled, the RAM
efficiency is 44X better than before.

> I also observed a 20-25% gain by reusing the Field objects. Implementing my
> own Fieldable class was too complicated, so I simply extended the Field
> class (after removing final) and added 2 setter methods:
> 
>       public void setValue(String value) {
>         this.fieldsData = value;
>       }
>       public void setValue(byte[] value) {
>         this.fieldsData = value;
>       }
> 
> Since this improved performance significantly, I would vote to either add
> setters to Field or make it extendable.

OK I've opened LUCENE-963 for this & attached a patch.
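
(Once that patch is in, reuse should need nothing custom at all -- assuming
it keeps the setValue(String) signature you sketched; 'nextTitle' below is
just a stand-in for your next value:)

    // Create the Document and Field once, outside the indexing loop.
    Document doc = new Document();
    Field titleField = new Field("title", "", Field.Store.YES, Field.Index.TOKENIZED);
    doc.add(titleField);

    // Per document: swap the value in place, then re-add the same Document.
    titleField.setValue(nextTitle);
    writer.addDocument(doc);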

> Kudos to Mike for this huge improvement!

Thanks!

Mike



Re: Post mortem kudos for (LUCENE-843) :)

Posted by Peter Keegan <pe...@gmail.com>.
I did some performance comparison testing of Lucene 2.0 vs. trunk (with
LUCENE-843). I'm seeing at least a 4X increase in indexing rate with the new
DocumentsWriter in LUCENE-843 (still doing single-threaded indexing). Better
yet, the total time to build the index is much shorter because I can now
build the entire 3GB index (900K docs) in one segment in RAM (using
FSDirectory) and flush it to disk at the end. Before, I had to build smaller
segments (20K docs), merge after 20 segments and then optimize at the end.
The memory usage with LUCENE-843 is much lower, presumably because stored
fields and term vectors no longer sit in RAM.

I also observed a 20-25% gain by reusing the Field objects. Implementing my
own Fieldable class was too complicated, so I simply extended the Field
class (after removing final) and added 2 setter methods:

      public void setValue(String value) {
        this.fieldsData = value;
      }
      public void setValue(byte[] value) {
        this.fieldsData = value;
      }

Since this improved performance significantly, I would vote to either add
setters to Field or make it extendable.

Kudos to Mike for this huge improvement!

Peter

On 7/13/07, Michael McCandless <lu...@mikemccandless.com> wrote:
>
> "Grant Ingersoll" <gs...@apache.org> wrote:
>
> > This is good stuff...  Might be good to put an organized version of
> > this up on the Wiki under Best Practices
>
> I agree!  I will update the ImproveIndexingSpeed page:
>
>     http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
>
> with these suggestions.
>
> > On Jul 13, 2007, at 8:13 AM, Michael McCandless wrote:
> >
> > > Yeah it's not so easy now: Field.java does not have setters.
> > >
> > > You have to make your own class that implements Fieldable (or
> > > subclasses AbstractField) and adds your own setters.  Field.java is
> > > also [currently] final so you can't subclass it.
> > >
> >
> > Should we consider putting in these changes?  I think it might be a
> > little weird on the Search side to have setters for Field and it
> > sounds like it could cause trouble for people esp. in a threaded
> > indexing situation, but maybe I am mistaken?
>
> I think adding setters would be reasonable, if we clearly document
> that they are advanced, be-careful-about-threads, use-at-your-own-risk
> sort of methods?  Are there any concerns with that approach?  If not
> I'll open an issue and do it... this just makes it easier for people
> to maximize indexing performance "out of the box".
>
> Mike
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-dev-help@lucene.apache.org
>
>

Re: Post mortem kudos for (LUCENE-843) :)

Posted by Michael McCandless <lu...@mikemccandless.com>.
"Grant Ingersoll" <gs...@apache.org> wrote:

> This is good stuff...  Might be good to put an organized version of
> this up on the Wiki under Best Practices

I agree!  I will update the ImproveIndexingSpeed page:

    http://wiki.apache.org/lucene-java/ImproveIndexingSpeed

with these suggestions.

> On Jul 13, 2007, at 8:13 AM, Michael McCandless wrote:
>
> > Yeah it's not so easy now: Field.java does not have setters.
> >
> > You have to make your own class that implements Fieldable (or
> > subclasses AbstractField) and adds your own setters.  Field.java is
> > also [currently] final so you can't subclass it.
> >
> 
> Should we consider putting in these changes?  I think it might be a  
> little weird on the Search side to have setters for Field and it  
> sounds like it could cause trouble for people esp. in a threaded  
> indexing situation, but maybe I am mistaken?

I think adding setters would be reasonable, if we clearly document
that they are advanced, be-careful-about-threads, use-at-your-own-risk
sort of methods?  Are there any concerns with that approach?  If not
I'll open an issue and do it... this just makes it easier for people
to maximize indexing performance "out of the box".

Mike



Re: Post mortem kudos for (LUCENE-843) :)

Posted by Grant Ingersoll <gs...@apache.org>.
This is good stuff...  Might be good to put an organized version of
this up on the Wiki under Best Practices

On Jul 13, 2007, at 8:13 AM, Michael McCandless wrote:

>
> Yeah it's not so easy now: Field.java does not have setters.
>
> You have to make your own class that implements Fieldable (or
> subclasses AbstractField) and adds your own setters.  Field.java is
> also [currently] final so you can't subclass it.
>

Should we consider putting in these changes?  I think it might be a  
little weird on the Search side to have setters for Field and it  
sounds like it could cause trouble for people esp. in a threaded  
indexing situation, but maybe I am mistaken?

At any rate, it sounds like these would be good contributions as long  
as they are well documented.


-Grant





Re: Post mortem kudos for (LUCENE-843) :)

Posted by Michael McCandless <lu...@mikemccandless.com>.
"eks dev" <ek...@yahoo.co.uk> wrote:

> > Was 24M (and not more) clearly the fastest performance?
> 
> No, this is roughly the optimum. Throwing more memory at it makes things
> slightly faster, at a slow rate, up to a maximum at 32M.  After that
> things slowly start getting slower.

Interesting.  This matches the experience Doron had where adding more
RAM actually slowed things down a bit (posted to
LUCENE-843).

> We are not yet completely done with tuning, especially with the two tips
> you mentioned in this mail.
> Fields are already reused, but

Super.

> 1. Reusing Document: there is one new Vector() in there (and at these
> speeds, something like this makes a difference!!!)
> In Document: List fields = new Vector(); (by the way, must this be a
> synchronized Vector? Why not an ArrayList? Is there any difference?)

Oh yeah, it would be good to not "new Vector()" every time.

What I did in the benchmarking for LUCENE-843 was make a single
Document, make my N fields (using my own class that implements
Fieldable but lets me change the value), add these fields to the
Document, and then hold onto the fields as local variables (textField,
titleField, idField, etc.).

Then for each doc I just set the field values
(textField.setValue(...), etc.) and then call writer.addDocument(doc).
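
In code it's roughly the following (ReusableStringField is described
below; its constructor arguments here, and the MyRecord/source types, are
just stand-ins for whatever you use):

    // Build the Document and its fields once, keep the field instances in
    // local variables, and only swap values per document.
    Document doc = new Document();
    ReusableStringField idField    = new ReusableStringField("id",    Field.Store.YES, Field.Index.UN_TOKENIZED);
    ReusableStringField titleField = new ReusableStringField("title", Field.Store.YES, Field.Index.TOKENIZED);
    ReusableStringField textField  = new ReusableStringField("text",  Field.Store.NO,  Field.Index.TOKENIZED);
    doc.add(idField);
    doc.add(titleField);
    doc.add(textField);

    for (Iterator it = source.iterator(); it.hasNext();) {
      MyRecord rec = (MyRecord) it.next();   // your own input record
      idField.setStringValue(rec.id);
      titleField.setStringValue(rec.title);
      textField.setStringValue(rec.text);
      writer.addDocument(doc);               // same Document object every time
    }

The important part is that nothing is allocated per document on the
Lucene side; only your own record parsing allocates anything.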

> 2. Reusing Field: excuse my ignorance, but how can I do it? With Document
> it is easy with
> luceneDocument.add(field)
> luceneDocument.removeFields(name) // Wouldn't it be better to have
> luceneDocument.removeAllFields()?

Yeah it's not so easy now: Field.java does not have setters.

You have to make your own class that implements Fieldable (or
subclasses AbstractField) and adds your own setters.  Field.java is
also [currently] final so you can't subclass it.

In the benchmarking code (see patch in
http://issues.apache.org/jira/browse/LUCENE-947) I created a
ReusableStringField that lets you setStringValue(...).  You could use
that as your Field class.

Alternatively you can make a "ReusableStringReader" (there's one in
DocumentsWriter in the trunk now) and then use the normal Field class
but pass in your instance of ReusableStringReader.  This approach
could be faster if you implemented it to use a char[] instead of a
String (the current one in DocumentsWriter reads a String).
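
A char[]-based version might look roughly like the sketch below (the name
and details are illustrative; the real ReusableStringReader in
DocumentsWriter wraps a String):

    import java.io.Reader;

    // Reusable, char[]-backed Reader: call init() with the next document's
    // characters instead of allocating a new Reader (and String) each time.
    final class ReusableCharArrayReader extends Reader {
      private char[] buf;
      private int len;
      private int pos;

      void init(char[] buf, int len) {
        this.buf = buf;
        this.len = len;
        this.pos = 0;
      }

      public int read(char[] dest, int off, int howMany) {
        if (pos >= len)
          return -1;                              // end of the current value
        int chunk = Math.min(howMany, len - pos);
        System.arraycopy(buf, pos, dest, off, chunk);
        pos += chunk;
        return chunk;
      }

      public void close() {}                      // nothing to release
    }

You would construct the Field once, passing this reader instance, and then
call init(...) with each document's chars before addDocument; the Field
keeps holding the same Reader, so only its contents change.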

> 3. "LUCENE-845" Whoops, I totally overlooked this one! And I am sure my
> maxBufferedDocs is well under what fits in 24Mb?!?  Any good tip on how
> to determine good number: count added docs and see how far this number
> goes before flush() triggers (how I detect when flush by ram gets
> triggered?) and than add 10% to this number...

Whoa, OK.

First you need to figure out how many docs are "typically" getting
flushed at 24 MB.  Easiest way would be to call
writer.setInfoStream(System.out) and look for the lines that say
"flush postings as segment XXX numDocs=YYY".  Likely your YYY is
"fairly" close every time since your docs are so predictable in size.

Then, set your maxBufferedDocs anywhere above YYY and below 10 * YYY
and you shouldn't hit LUCENE-845 (actually 5.5 * YYY is best since it
gives you max safety margin).  Note that you should call
setMaxBufferedDocs(...) first and then call setRAMBufferSizeMB(...),
in that order.  If you do it backwards then the writer will flush at
exactly that number of buffered docs instead.
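
In code, the calibration plus the final settings look something like this
(observedFlushCount is a placeholder for whatever YYY you measure):

    // Calibration run: watch the infoStream output for lines like
    // "flush postings as segment XXX numDocs=YYY" to learn how many docs
    // typically fit in the 24 MB buffer.
    writer.setInfoStream(System.out);

    // Final settings: maxBufferedDocs first, then the RAM buffer size, so
    // that RAM usage (not the doc count) is what actually triggers flushes.
    int observedFlushCount = 5000;                       // <-- your measured YYY
    writer.setMaxBufferedDocs((int) (5.5 * observedFlushCount));
    writer.setRAMBufferSizeMB(24.0);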

Mike
