Posted to dev@lucene.apache.org by Aviran <am...@infosciences.com> on 2004/07/12 20:19:08 UTC

FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Hi all,
First let me explain what I found out. I'm running Lucene on a 4-CPU server.
While doing some stress tests I noticed (by taking a full thread dump) that
searching threads are blocked in the method public FieldInfo fieldInfo(int
fieldNumber), which causes significant CPU idle time.
I noticed that the class org.apache.lucene.index.FieldInfos uses the private
class members Vector byNumber and Hashtable byName, both of which are
synchronized collections. By changing Vector byNumber to an ArrayList and
Hashtable byName to a HashMap, I was able to get a 110% improvement in
performance (number of searches per second).
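
For reference, the heart of the attached patch is just the two collection
declarations in FieldInfos; per the class javadoc, only one thread ever adds
fields while no readers are active, so the unsynchronized collections are safe:

  // before: synchronized collections, contended by every searching thread
  // private Vector    byNumber = new Vector();
  // private Hashtable byName   = new Hashtable();

  // after: plain collections, no monitor acquired on the read path
  private ArrayList byNumber = new ArrayList();
  private HashMap   byName   = new HashMap();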

This issue was raised on the Lucene user list, where Doug suggested I submit
a patch to the developer mailing list. So here it is, attached to this
email.
I also reported this issue in Bugzilla (Bug 30058).

Thanks,
Aviran



---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org


Re: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Byron Miller <by...@gmail.com>.
I can provide access for debugging on a dual Xeon (with hyper-threading
enabled) that has a 100-million-document index.

On Thu, 15 Jul 2004 14:09:23 -0400, Aviran <am...@infosciences.com> wrote:

> This is just a subset of the entire index.
> In the project that I'm working on we have ~1500 documents (this is a
> dynamic number, since documents are being added to and deleted from the
> index every day).
> 
> What do you consider a large index? I can build one just for testing
> purposes and see what the difference is.

---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


RE: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Aviran <am...@infosciences.com>.
Aviran wrote:
> My test index is pretty small, about 250 documents and about 24
> fields in each document. The test is done by starting 10 threads that
> repeat a simple one-word query (each thread queries a different word).
> Neither range nor wildcard queries are used.
> I let the test run for about a minute and then I do a full thread dump to
> see the stack trace.
> I use a single searcher which never gets closed.

Thanks for providing these details.

Benchmarking with such a small index will emphasize per-query-term 
overheads, like dictionary lookup.  This is probably why you've seen 
such a large speedup when removing some thread contention from that 
area.  However, folks with larger indexes are much less likely to 
encounter this thread contention, since relatively less of their time 
will be spent in this area.

Is this benchmark typical of your application?

Doug

This is just a subset of the entire index.
In the project that I'm working on we have ~1500 documents (this is a
dynamic number, since documents are being added to and deleted from the
index every day).

What do you consider a large index? I can build one just for testing
purposes and see what the difference is.
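
Something along these lines should be enough to build a throwaway index with
a million synthetic documents (a rough sketch against the 1.4-era API; the
path, field names and term distribution are arbitrary):

import java.util.Random;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class BuildTestIndex {
  public static void main(String[] args) throws Exception {
    // the third argument (create == true) wipes anything at this path
    IndexWriter writer = new IndexWriter("/tmp/big-test-index",
                                         new StandardAnalyzer(), true);
    Random random = new Random(42);
    for (int i = 0; i < 1000000; i++) {
      Document doc = new Document();
      doc.add(Field.Keyword("id", Integer.toString(i)));
      // a few very common terms plus many rarer termNNNNN tokens
      doc.add(Field.Text("body", "common filler text term"
                                 + random.nextInt(50000)));
      writer.addDocument(doc);
    }
    writer.optimize();
    writer.close();
  }
}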

Aviran




---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


Re: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Doug Cutting <cu...@apache.org>.
Aviran wrote:
> My test index is pretty small, about 250 documents and about 24 fields
> in each document.
> The test is done by starting 10 threads that repeat a simple one-word query
> (each thread queries a different word). Neither range nor wildcard queries
> are used.
> I let the test run for about a minute and then I do a full thread dump to
> see the stack trace.
> I use a single searcher which never gets closed.

Thanks for providing these details.

Benchmarking with such a small index will emphasize per-query-term 
overheads, like dictionary lookup.  This is probably why you've seen 
such a large speedup when removing some thread contention from that 
area.  However, folks with larger indexes are much less likely to 
encounter this thread contention, since relatively less of their time 
will be spent in this area.

Is this benchmark typical of your application?

Doug

---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


RE: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Aviran <am...@infosciences.com>.
> The second one is on
> org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:318) 
> which is a synchronized method, thus causing locks. I guess the 
> synchronization is done for a good reason, but you probably know the 
> answer better than me.

I'm surprised this is showing up.  Can you tell more about the size of 
your index and the nature of your queries?  If you're, e.g., doing lots 
of range or wildcard queries, then I can maybe see this showing up a 
little.  What is your benchmark like?

Are you "warming the cache" when you're performing these benchmarks?  In 
other words, are you first sending a few queries at a low rate before 
you start slamming it with high traffic?  If you're not, and/or you have 
a lot of fields, or you re-open searchers a lot, then this could show up 
too.

Doug


My test index is pretty small, about 250 documents and about 24 fields
in each document.
The test is done by starting 10 threads that repeat a simple one-word query
(each thread queries a different word). Neither range nor wildcard queries
are used.
I let the test run for about a minute and then I do a full thread dump to
see the stack trace.
I use a single searcher which never gets closed.
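
The harness is essentially this (simplified; the index path, field name and
query words stand in for the real ones):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

public class SearchStressTest {
  public static void main(String[] args) throws Exception {
    // one searcher shared by all threads, never closed during the test
    final IndexSearcher searcher = new IndexSearcher("/tmp/test-index");
    final String[] words = { "alpha", "beta", "gamma", "delta", "epsilon",
                             "zeta", "eta", "theta", "iota", "kappa" };
    for (int t = 0; t < words.length; t++) {    // 10 threads, one word each
      final String word = words[t];
      new Thread() {
        public void run() {
          try {
            while (true) {
              Hits hits = searcher.search(new TermQuery(new Term("body", word)));
              hits.length();                    // touch the result
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      }.start();
    }
    // let it run for about a minute, then take a full thread dump
    // (e.g. kill -QUIT <pid>) to see where the threads are blocked
  }
}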

Aviran



---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


Re: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Doug Cutting <cu...@apache.org>.
Aviran wrote:
> The next bottleneck is not very clear. There are two candidates which appear
> frequently in the thread dump.
> 
> The first one, which appears more often than the others, is
> java.lang.StrictMath.log, called from
> org.apache.lucene.search.DefaultSimilarity.idf. The threads are definitely
> spending a lot of time there. (I don't know if there is anything we can do
> about it.)

We could add an idf cache for small values of docFreq, e.g., 0-32, 
represented as a float[].  If you're doing a lot of range or wildcard 
queries then this should have very high hit rates.  The cache should be 
on the searcher, which determines the numDocs parameter.  It could be 
accessed through a new Searchable method, idf(Term).
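
A minimal sketch of such a cache, assuming the stock DefaultSimilarity
formula log(numDocs/(docFreq+1)) + 1 and one cache instance built per
searcher; the class and field names below are made up for illustration:

/** Hypothetical per-searcher idf cache; not part of Lucene. */
final class IdfCache {
  private static final int SMALL = 33;        // cache docFreq 0..32
  private final float[] cached = new float[SMALL];
  private final int numDocs;

  IdfCache(int numDocs) {                     // numDocs is fixed per searcher
    this.numDocs = numDocs;
    for (int i = 0; i < SMALL; i++) {
      cached[i] = (float) (Math.log(numDocs / (double) (i + 1)) + 1.0);
    }
  }

  float idf(int docFreq) {
    if (docFreq < SMALL) {
      return cached[docFreq];                 // no StrictMath.log call
    }
    return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
  }
}

Because numDocs does not change for the lifetime of a searcher, the array can
be filled once up front and then read without any synchronization.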

> The second one is on
> org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:318) which is
> a synchronized method, thus causing locks. I guess the synchronization is
> done for a good reason, but you probably know the answer better than me.

I'm surprised this is showing up.  Can you tell more about the size of 
your index and the nature of your queries?  If you're, e.g., doing lots 
of range or wildcard queries, then I can maybe see this showing up a 
little.  What is your benchmark like?

Are you "warming the cache" when you're performing these benchmarks?  In 
other words, are you first sending a few queries at a low rate before 
you start slamming it with high traffic?  If you're not, and/or you have 
a lot of fields, or you re-open searchers a lot, then this could show up 
too.
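
For example, a single-threaded warm-up pass like this, run before the timed
portion, keeps that one-time loading cost out of the measurement (the field
name and word list are placeholders):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

final class Warmup {
  /** Issue each benchmark term once before starting the stress threads. */
  static void warm(IndexSearcher searcher, String field, String[] words)
      throws Exception {
    for (int i = 0; i < words.length; i++) {
      searcher.search(new TermQuery(new Term(field, words[i])));
    }
  }
}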

Doug



---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


RE: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Aviran <am...@infosciences.com>.
Tested using the files from CVS, and it works great.

The next bottleneck is not very clear. There are two candidates which appear
frequently in the thread dump.

The first one, which appears more often than the others, is
java.lang.StrictMath.log, called from
org.apache.lucene.search.DefaultSimilarity.idf. The threads are definitely
spending a lot of time there. (I don't know if there is anything we can do
about it.)
"Thread-14" daemon prio=1 tid=0x080dd7e0 nid=0xd51 runnable
[4ea42000..4ea4387c]
        at java.lang.StrictMath.log(Native Method)
        at java.lang.Math.log(Math.java:255)
        at org.apache.lucene.search.DefaultSimilarity.idf(DefaultSimilarity.java:43)
        at org.apache.lucene.search.Similarity.idf(Similarity.java:255)
        at org.apache.lucene.search.TermQuery$TermWeight.sumOfSquaredWeights(TermQuery.java:47)
        at org.apache.lucene.search.BooleanQuery$BooleanWeight.sumOfSquaredWeights(BooleanQuery.java:110)
        at org.apache.lucene.search.Query.weight(Query.java:86)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:154)



The second one is on
org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:318) which is
a synchronized method, thus causing locks. I guess the synchronization is
done for a good reason, but you probably know the answer better than me.
"Thread-29" daemon prio=1 tid=0x4ca26fa0 nid=0x188e waiting for monitor
entry [4f108000..4f10987c]
        at org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:318)
        - waiting to lock <0x4616b600> (a org.apache.lucene.index.SegmentReader)
        at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:64)
        at org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:165)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:154)


Aviran


-----Original Message-----
From: Otis Gospodnetic [mailto:otis_gospodnetic@yahoo.com] 
Sent: Tuesday, July 13, 2004 10:08 AM
To: Lucene Developers List
Subject: Re: FW: Lucene Search has poor cpu utilization on a 4-CPU machine


Thanks, I applied your change to the code in CVS.
Maybe you can test things out with your change, and see what the next
bottleneck is.

Thanks,
Otis

--- Aviran <am...@infosciences.com> wrote:
> 
> Hi all,
> First let me explain what I found out. I'm running Lucene on a 4-CPU
> server. While doing some stress tests I noticed (by taking a full thread
> dump) that searching threads are blocked in the method public FieldInfo
> fieldInfo(int fieldNumber), which causes significant CPU idle time.
> I noticed that the class org.apache.lucene.index.FieldInfos uses the
> private class members Vector byNumber and Hashtable byName, both of which
> are synchronized collections. By changing Vector byNumber to an ArrayList
> and Hashtable byName to a HashMap, I was able to get a 110% improvement in
> performance (number of searches per second).
> 
> This issue was raised on the Lucene user list, where Doug suggested I
> submit a patch to the developer mailing list. So here it is, attached to
> this email.
> I also reported this issue in Bugzilla (Bug 30058).
> 
> Thanks,
> Aviran
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: lucene-user-help@jakarta.apache.org
> 
> > package org.apache.lucene.index;
> 
> /**
>  * Copyright 2004 The Apache Software Foundation
>  *
>  * Licensed under the Apache License, Version 2.0 (the "License");
>  * you may not use this file except in compliance with the License.
>  * You may obtain a copy of the License at
>  *
>  *     http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> implied.
>  * See the License for the specific language governing permissions
> and
>  * limitations under the License.
>  */
> 
> import java.util.*;
> import java.io.IOException;
> 
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> 
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.OutputStream;
> import org.apache.lucene.store.InputStream;
> 
> /** Access to the Field Info file that describes document fields and 
> whether or
>  *  not they are indexed. Each segment has a separate Field Info file. 
> Objects
>  *  of this class are thread-safe for multiple readers, but only one 
> thread can
>  *  be adding documents at a time, with no other reader or writer 
> threads
>  *  accessing this object.
>  */
> final class FieldInfos {
>   // private Vector byNumber = new Vector(); CHANGE BY AVIRAN
>   private ArrayList byNumber = new ArrayList();
>   private HashMap byName = new HashMap(); // Changes by aviran from 
> Hashtable
> 
>   FieldInfos() {
>     add("", false);
>   }
> 
>   /**
>    * Construct a FieldInfos object using the directory and the name of 
> the file
>    * InputStream
>    * @param d The directory to open the InputStream from
>    * @param name The name of the file to open the InputStream from in 
> the Directory
>    * @throws IOException
>    *
>    * @see #read
>    */
>   FieldInfos(Directory d, String name) throws IOException {
>     InputStream input = d.openFile(name);
>     try {
>       read(input);
>     } finally {
>       input.close();
>     }
>   }
> 
>   /** Adds field info for a Document. */
>   public void add(Document doc) {
>     Enumeration fields = doc.fields();
>     while (fields.hasMoreElements()) {
>       Field field = (Field) fields.nextElement();
>       add(field.name(), field.isIndexed(), 
> field.isTermVectorStored());
>     }
>   }
> 
>   /**
>    * @param names The names of the fields
>    * @param storeTermVectors Whether the fields store term vectors or 
> not
>    */
>   public void addIndexed(Collection names, boolean storeTermVectors) {
>     Iterator i = names.iterator();
>     int j = 0;
>     while (i.hasNext()) {
>       add((String)i.next(), true, storeTermVectors);
>     }
>   }
> 
>   /**
>    * Assumes the field is not storing term vectors
>    * @param names The names of the fields
>    * @param isIndexed Whether the fields are indexed or not
>    *
>    * @see #add(String, boolean)
>    */
>   public void add(Collection names, boolean isIndexed) {
>     Iterator i = names.iterator();
>     int j = 0;
>     while (i.hasNext()) {
>       add((String)i.next(), isIndexed);
>     }
>   }
> 
>   /**
>    * Calls three parameter add with false for the storeTermVector 
> parameter
>    * @param name The name of the Field
>    * @param isIndexed true if the field is indexed
>    * @see #add(String, boolean, boolean)
>    */
>   public void add(String name, boolean isIndexed) {
>     add(name, isIndexed, false);
>   }
> 
> 
>   /** If the field is not yet known, adds it. If it is known, checks 
> to make
>    *  sure that the isIndexed flag is the same as was given previously 
> for this
>    *  field. If not - marks it as being indexed.  Same goes for 
> storeTermVector
>    *
>    * @param name The name of the field
>    * @param isIndexed true if the field is indexed
>    * @param storeTermVector true if the term vector should be stored
>    */
>   public void add(String name, boolean isIndexed, boolean
> storeTermVector) {
>     FieldInfo fi = fieldInfo(name);
>     if (fi == null) {
>       addInternal(name, isIndexed, storeTermVector);
>     } else {
>       if (fi.isIndexed != isIndexed) {
>         fi.isIndexed = true;                      // once indexed,
> always index
>       }
>       if (fi.storeTermVector != storeTermVector) {
>         fi.storeTermVector = true;                // once vector,
> always vector
>       }
>     }
>   }
> 
>   private void addInternal(String name, boolean isIndexed,
>                            boolean storeTermVector) {
>     FieldInfo fi =
>       new FieldInfo(name, isIndexed, byNumber.size(), 
> storeTermVector);
>     byNumber.add(fi);
>     byName.put(name, fi);
>   }
> 
>   public int fieldNumber(String fieldName) {
>     FieldInfo fi = fieldInfo(fieldName);
>     if (fi != null)
>       return fi.number;
>     else
>       return -1;
>   }
> 
>   public FieldInfo fieldInfo(String fieldName) {
>     return (FieldInfo) byName.get(fieldName);
>   }
> 
>   public String fieldName(int fieldNumber) {
>     return fieldInfo(fieldNumber).name;
>   }
> 
>   public FieldInfo fieldInfo(int fieldNumber) {
>       return (FieldInfo) byNumber.get(fieldNumber);
>   }
> 
>   public int size() {
>     return byNumber.size();
>   }
> 
>   public boolean hasVectors() {
>     boolean hasVectors = false;
>     for (int i = 0; i < size(); i++) {
>       if (fieldInfo(i).storeTermVector)
>         hasVectors = true;
>     }
>     return hasVectors;
>   }
> 
>   public void write(Directory d, String name) throws IOException {
>     OutputStream output = d.createFile(name);
>     try {
>       write(output);
>     } finally {
>       output.close();
>     }
>   }
> 
>   public void write(OutputStream output) throws IOException {
>     output.writeVInt(size());
>     for (int i = 0; i < size(); i++) {
>       FieldInfo fi = fieldInfo(i);
>       byte bits = 0x0;
>       if (fi.isIndexed) bits |= 0x1;
>       if (fi.storeTermVector) bits |= 0x2;
>       output.writeString(fi.name);
>       //Was REMOVE
>       //output.writeByte((byte)(fi.isIndexed ? 1 : 0));
>       output.writeByte(bits);
>     }
>   }
> 
>   private void read(InputStream input) throws IOException {
>     int size = input.readVInt();//read in the size
>     for (int i = 0; i < size; i++) {
>       String name = input.readString().intern();
>       byte bits = input.readByte();
>       boolean isIndexed = (bits & 0x1) != 0;
>       boolean storeTermVector = (bits & 0x2) != 0;
>       addInternal(name, isIndexed, storeTermVector);
>     }
>   }
> 
> }
> 
> >
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org



Re: FW: Lucene Search has poor cpu utilization on a 4-CPU machine

Posted by Otis Gospodnetic <ot...@yahoo.com>.
Thanks, I applied your change to the code in CVS.
Maybe you can test things out with your change, and see what the next
bottleneck is.

Thanks,
Otis

--- Aviran <am...@infosciences.com> wrote:
> 
> Hi all,
> First let me explain what I found out. I'm running Lucene on a 4-CPU
> server. While doing some stress tests I noticed (by taking a full thread
> dump) that searching threads are blocked in the method public FieldInfo
> fieldInfo(int fieldNumber), which causes significant CPU idle time.
> I noticed that the class org.apache.lucene.index.FieldInfos uses the
> private class members Vector byNumber and Hashtable byName, both of which
> are synchronized collections. By changing Vector byNumber to an ArrayList
> and Hashtable byName to a HashMap, I was able to get a 110% improvement in
> performance (number of searches per second).
> 
> This issue was raised on the Lucene user list, where Doug suggested I
> submit a patch to the developer mailing list. So here it is, attached to
> this email.
> I also reported this issue in Bugzilla (Bug 30058).
> 
> Thanks,
> Aviran
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: lucene-user-help@jakarta.apache.org
> 
> > package org.apache.lucene.index;
> 
> /**
>  * Copyright 2004 The Apache Software Foundation
>  *
>  * Licensed under the Apache License, Version 2.0 (the "License");
>  * you may not use this file except in compliance with the License.
>  * You may obtain a copy of the License at
>  *
>  *     http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing,
> software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> implied.
>  * See the License for the specific language governing permissions
> and
>  * limitations under the License.
>  */
> 
> import java.util.*;
> import java.io.IOException;
> 
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> 
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.OutputStream;
> import org.apache.lucene.store.InputStream;
> 
> /** Access to the Field Info file that describes document fields and
> whether or
>  *  not they are indexed. Each segment has a separate Field Info
> file. Objects
>  *  of this class are thread-safe for multiple readers, but only one
> thread can
>  *  be adding documents at a time, with no other reader or writer
> threads
>  *  accessing this object.
>  */
> final class FieldInfos {
>   // private Vector byNumber = new Vector(); CHANGE BY AVIRAN
>   private ArrayList byNumber = new ArrayList();
>   private HashMap byName = new HashMap(); // Changes by aviran from
> Hashtable
> 
>   FieldInfos() {
>     add("", false);
>   }
> 
>   /**
>    * Construct a FieldInfos object using the directory and the name
> of the file
>    * InputStream
>    * @param d The directory to open the InputStream from
>    * @param name The name of the file to open the InputStream from in
> the Directory
>    * @throws IOException
>    *
>    * @see #read
>    */
>   FieldInfos(Directory d, String name) throws IOException {
>     InputStream input = d.openFile(name);
>     try {
>       read(input);
>     } finally {
>       input.close();
>     }
>   }
> 
>   /** Adds field info for a Document. */
>   public void add(Document doc) {
>     Enumeration fields = doc.fields();
>     while (fields.hasMoreElements()) {
>       Field field = (Field) fields.nextElement();
>       add(field.name(), field.isIndexed(),
> field.isTermVectorStored());
>     }
>   }
> 
>   /**
>    * @param names The names of the fields
>    * @param storeTermVectors Whether the fields store term vectors or
> not
>    */
>   public void addIndexed(Collection names, boolean storeTermVectors)
> {
>     Iterator i = names.iterator();
>     int j = 0;
>     while (i.hasNext()) {
>       add((String)i.next(), true, storeTermVectors);
>     }
>   }
> 
>   /**
>    * Assumes the field is not storing term vectors
>    * @param names The names of the fields
>    * @param isIndexed Whether the fields are indexed or not
>    *
>    * @see #add(String, boolean)
>    */
>   public void add(Collection names, boolean isIndexed) {
>     Iterator i = names.iterator();
>     int j = 0;
>     while (i.hasNext()) {
>       add((String)i.next(), isIndexed);
>     }
>   }
> 
>   /**
>    * Calls three parameter add with false for the storeTermVector
> parameter
>    * @param name The name of the Field
>    * @param isIndexed true if the field is indexed
>    * @see #add(String, boolean, boolean)
>    */
>   public void add(String name, boolean isIndexed) {
>     add(name, isIndexed, false);
>   }
> 
> 
>   /** If the field is not yet known, adds it. If it is known, checks
> to make
>    *  sure that the isIndexed flag is the same as was given
> previously for this
>    *  field. If not - marks it as being indexed.  Same goes for
> storeTermVector
>    *
>    * @param name The name of the field
>    * @param isIndexed true if the field is indexed
>    * @param storeTermVector true if the term vector should be stored
>    */
>   public void add(String name, boolean isIndexed, boolean
> storeTermVector) {
>     FieldInfo fi = fieldInfo(name);
>     if (fi == null) {
>       addInternal(name, isIndexed, storeTermVector);
>     } else {
>       if (fi.isIndexed != isIndexed) {
>         fi.isIndexed = true;                      // once indexed,
> always index
>       }
>       if (fi.storeTermVector != storeTermVector) {
>         fi.storeTermVector = true;                // once vector,
> always vector
>       }
>     }
>   }
> 
>   private void addInternal(String name, boolean isIndexed,
>                            boolean storeTermVector) {
>     FieldInfo fi =
>       new FieldInfo(name, isIndexed, byNumber.size(),
> storeTermVector);
>     byNumber.add(fi);
>     byName.put(name, fi);
>   }
> 
>   public int fieldNumber(String fieldName) {
>     FieldInfo fi = fieldInfo(fieldName);
>     if (fi != null)
>       return fi.number;
>     else
>       return -1;
>   }
> 
>   public FieldInfo fieldInfo(String fieldName) {
>     return (FieldInfo) byName.get(fieldName);
>   }
> 
>   public String fieldName(int fieldNumber) {
>     return fieldInfo(fieldNumber).name;
>   }
> 
>   public FieldInfo fieldInfo(int fieldNumber) {
>       return (FieldInfo) byNumber.get(fieldNumber);
>   }
> 
>   public int size() {
>     return byNumber.size();
>   }
> 
>   public boolean hasVectors() {
>     boolean hasVectors = false;
>     for (int i = 0; i < size(); i++) {
>       if (fieldInfo(i).storeTermVector)
>         hasVectors = true;
>     }
>     return hasVectors;
>   }
> 
>   public void write(Directory d, String name) throws IOException {
>     OutputStream output = d.createFile(name);
>     try {
>       write(output);
>     } finally {
>       output.close();
>     }
>   }
> 
>   public void write(OutputStream output) throws IOException {
>     output.writeVInt(size());
>     for (int i = 0; i < size(); i++) {
>       FieldInfo fi = fieldInfo(i);
>       byte bits = 0x0;
>       if (fi.isIndexed) bits |= 0x1;
>       if (fi.storeTermVector) bits |= 0x2;
>       output.writeString(fi.name);
>       //Was REMOVE
>       //output.writeByte((byte)(fi.isIndexed ? 1 : 0));
>       output.writeByte(bits);
>     }
>   }
> 
>   private void read(InputStream input) throws IOException {
>     int size = input.readVInt();//read in the size
>     for (int i = 0; i < size; i++) {
>       String name = input.readString().intern();
>       byte bits = input.readByte();
>       boolean isIndexed = (bits & 0x1) != 0;
>       boolean storeTermVector = (bits & 0x2) != 0;
>       addInternal(name, isIndexed, storeTermVector);
>     }
>   }
> 
> }
> 
> >
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: lucene-dev-help@jakarta.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org