Posted to java-user@lucene.apache.org by Michael McCandless <lu...@mikemccandless.com> on 2007/03/13 14:44:55 UTC

Re: Urgent : How much actually the disk space needed to optimize the index?

"maureen tanuwidjaja" <au...@yahoo.com> wrote:

>   "One thing that stands out in your listing is: your norms file
>   (_1ke1.nrm) is enormous compared to all other files.  Are you indexing
>   many tiny docs where each doc has highly variable fields or
>   something?"
>   
>   Ya, I am also confused why this .nrm file is tremendous in size.
>   I am indexing a total of 657,739 XML documents.
>   The total number of fields is 37,552 (I am using XML tags as the
>   fields).

OK, this is going to be a problem for Lucene.

This case will definitely go over 2X disk usage during optimize.  I
will update the javadocs to call out this caveat.

The .nrm file (norms) requires 1 byte per document per unique field in
the segment, regardless of whether that document actually has that field
(i.e., it's not a "sparse" representation).

When you have many small docs, and each doc has (somewhat) different
fields from the others, this results in a tremendous amount of storage
for the norms.

The thing is, within one segment it may be OK, since that segment holds
only a subset of all docs and fields.  But when segments are merged (as
optimize does), the product of #docs and #fields grows
"multiplicatively" and requires far, far more storage than the sum of
the individual segments.
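
As a rough sanity check (my arithmetic, and it assumes norms are enabled
for every field): a fully optimized index, with all docs in one segment,
would need about

    657,739 docs x 37,552 fields x 1 byte/norm  ~=  24.7 GB

for the .nrm file alone, which would explain why that file dwarfs
everything else.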

The only simple workaround I can think of is to set maxMergeDocs to
keep all segments "small".  But then you may have too many segments
with time.  Either that or find a way to reduce the number of unique
fields that you actually need to store.
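
In code, something like this should do it (a rough sketch against the
current IndexWriter API; the path and the 66,000 cap are only
illustrative values, not recommendations):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class SmallSegments {
        public static void main(String[] args) throws Exception {
            // Illustrative path; "true" creates a new index.
            IndexWriter writer = new IndexWriter("/path/to/index",
                                                 new StandardAnalyzer(), true);
            // Cap merged segments so no single segment's norms get huge.
            writer.setMaxMergeDocs(66000);
            // ... addDocument() calls as usual ...
            writer.close();
        }
    }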

Mike



Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by Michael McCandless <lu...@mikemccandless.com>.
"Michael McCandless" <lu...@mikemccandless.com> wrote:

> The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store.

Actually, there is one more, even simpler, workaround: turn off norms
for these fields.

I've opened Jira issue 830 to track this:

    http://issues.apache.org/jira/browse/LUCENE-830

Mike



Re: How to disable lucene norm factor?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
OK Mike, I'll try it and see whether it works :) Then I will proceed to optimize the index.
  Well then, I guess it's fine to use the default value for maxMergeDocs, which is Integer.MAX_VALUE?
  
  Thanks a lot
  
  Regards,
  Maureen
  

Michael McCandless <lu...@mikemccandless.com> wrote:  
"maureen tanuwidjaja"  wrote:

>   How to disable lucene norm factor?

Once you've created a Field, and before adding it to your Document and
indexing, just call field.setOmitNorms(true).

Note, however, that you must do this for all Field instances with that
same field name, because whenever Lucene merges segments, if even one
document did not disable norms then the norms "spread" so that all
documents keep them for that field name.

That is, you must fully rebuild your index with the above code change to
truly stop storing norms.
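
For example (a minimal sketch; the field name and text are made up):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class NoNormsField {
        public static Document makeDoc(String tag, String text) {
            Document doc = new Document();
            Field f = new Field(tag, text, Field.Store.NO,
                                Field.Index.TOKENIZED);
            // Must be called on EVERY instance of this field name, in
            // every document, or the norms come back after a merge.
            f.setOmitNorms(true);
            doc.add(f);
            return doc;
        }
    }

(For fields that don't need tokenization, Field.Index.NO_NORMS does the
same thing in one step.)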

Mike




 

Re: How to disable lucene norm factor?

Posted by Michael McCandless <lu...@mikemccandless.com>.
"maureen tanuwidjaja" <au...@yahoo.com> wrote:

>   How to disable lucene norm factor?

Once you've created a Field, and before adding it to your Document and
indexing, just call field.setOmitNorms(true).

Note, however, that you must do this for all Field instances with that
same field name, because whenever Lucene merges segments, if even one
document did not disable norms then the norms "spread" so that all
documents keep them for that field name.

That is, you must fully rebuild your index with the above code change to
truly stop storing norms.

Mike



How to disable lucene norm factor?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Hi all,
  How to disable lucene norm factor?
  
  Thanks,
  Maureen



 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Hi Mike,
  
  
  How do I disable/turn off the norms? Is it done while indexing?
  
  Thanks,
  Maureen
  
 

lengthNorm accessible?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Hmmm... now I wonder whether it is possible to access this lengthNorm value so that it can be used as before, but without creating any .nrm file (i.e., with setOmitNorms(true)).
  
  Any other suggestion on how I could get the same ranking as before by making use of this lengthNorm, but without creating the .nrm file?
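
Something like this is what I have in mind (only a sketch; it assumes
DefaultSimilarity, whose lengthNorm is 1/sqrt(numTokens)):

    import org.apache.lucene.search.DefaultSimilarity;

    public class ExternalLengthNorm {
        private static final DefaultSimilarity SIM = new DefaultSimilarity();

        // The value Lucene would have written to the .nrm file
        // (boosts aside).  It could be kept in a database keyed by
        // (docId, fieldName) and folded back into the score at
        // query time.
        public static float lengthNorm(String fieldName, int numTokens) {
            return SIM.lengthNorm(fieldName, numTokens);
        }
    }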
  
  
  Thanks,
  Maureen
  
  
  
  Xiaocheng Luan <je...@yahoo.com> wrote:  
You can store the fields in the index itself if you want, without indexing them (just flag them as stored/unindexed). I believe storing fields should not incur the "norms" size problem, but please correct me if I'm wrong.

Thanks,
Xiaocheng
maureen tanuwidjaja wrote: Ya... I think I will store it in the database so that later it could be used in scoring/ranking for retrieval... :)
  
  Another thing I would like to see is whether the precision or recall will be much affected by this...
  
  Regards,
  Maureen

Xiaocheng Luan wrote: One side-effect of turning off the norms may be that the scoring/ranking will be different? Do you need to search by each of these many fields? If not, you probably don't have to index these fields (but store them for retrieval?).

Just a thought.
Xiaocheng

Michael McCandless  wrote: "maureen tanuwidjaja"  wrote:
   
> "The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store."
>   It is not possible for me to reduce the number of fields needed to
>   store...
>   
>   Could you recommend a maxMergeDocs value that is small enough to
>   keep all segments small?
>   
>   I would also like to ask whether, if the optimize is successful,
>   searching will then be significantly faster compared to the
>   unoptimized index?

I think you'd need to test different values for your situation.  Maybe
try 66,000, which would give you ~10 segments at your current number of
docs?

>   I get search results in 30 seconds to 3 minutes, which is actually
>   quite unacceptable for the "search engine" I am building... Is there
>   any recommendation on how searching could be made faster?

I think you'll need to turn off norms.  I expect a lot of the slowness is
in loading the large norms files for the first time.

Mike




 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by Xiaocheng Luan <je...@yahoo.com>.
You can store the fields in the index itself if you want, without indexing them (just flag them as stored/unindexed). I believe storing fields should not incur the "norms" size problem, but please correct me if I'm wrong.
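
For instance (a sketch only; the field name and value are made up), a
stored-but-unindexed field looks like this:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class StoredOnly {
        // The field is retrievable from the index but is not indexed,
        // so it adds nothing to the norms or the terms dictionary.
        public static Document makeDoc(String tag, String text) {
            Document doc = new Document();
            doc.add(new Field(tag, text, Field.Store.YES, Field.Index.NO));
            return doc;
        }
    }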

Thanks,
Xiaocheng
maureen tanuwidjaja <au...@yahoo.com> wrote: Ya... I think I will store it in the database so that later it could be used in scoring/ranking for retrieval... :)
  
  Another thing I would like to see is whether the precision or recall will be much affected by this...
  
  Regards,
  Maureen

Xiaocheng Luan wrote: One side-effect of turning off the norms may be that the scoring/ranking will be different? Do you need to search by each of these many fields? If not, you probably don't have to index these fields (but store them for retrieval?).

Just a thought.
Xiaocheng

Michael McCandless  wrote: "maureen tanuwidjaja"  wrote:
   
> "The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store."
>   It is not possible for me to reduce the number of fields needed to
>   store...
>   
>   Could you recommend a maxMergeDocs value that is small enough to
>   keep all segments small?
>   
>   I would also like to ask whether, if the optimize is successful,
>   searching will then be significantly faster compared to the
>   unoptimized index?

I think you'd need to test different values for your situation.  Maybe
try 66,000, which would give you ~10 segments at your current number of
docs?

>   I get search results in 30 seconds to 3 minutes, which is actually
>   quite unacceptable for the "search engine" I am building... Is there
>   any recommendation on how searching could be made faster?

I think you'll need to turn off norms.  I expect a lot of the slowness is
in loading the large norms files for the first time.

Mike




 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Ya... I think I will store it in the database so that later it could be used in scoring/ranking for retrieval... :)
  
  Another thing I would like to see is whether the precision or recall will be much affected by this...
  
  Regards,
  Maureen

Xiaocheng Luan <je...@yahoo.com> wrote: One side-effect of turning off the norms may be that the scoring/ranking will be different? Do you need to search by each of these many fields? If not, you probably don't have to index these fields (but store them for retrieval?).

Just a thought.
Xiaocheng

Michael McCandless  wrote: "maureen tanuwidjaja"  wrote:
   
> "The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store."
>   It is not possible for me to reduce the number of fields needed to
>   store...
>   
>   Could you recommend a maxMergeDocs value that is small enough to
>   keep all segments small?
>   
>   I would also like to ask whether, if the optimize is successful,
>   searching will then be significantly faster compared to the
>   unoptimized index?

I think you'd need to test different values for your situation.  Maybe
try 66,000, which would give you ~10 segments at your current number of
docs?

>   I get search results in 30 seconds to 3 minutes, which is actually
>   quite unacceptable for the "search engine" I am building... Is there
>   any recommendation on how searching could be made faster?

I think you'll need to turn off norms.  I expect a lot of the slowness is
in loading the large norms files for the first time.

Mike




 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by Xiaocheng Luan <je...@yahoo.com>.
One side-effect of turning off the norms may be that the scoring/ranking will be different? Do you need to search by each of these many fields? If not, you probably don't have to index these fields (but store them for retrieval?).

Just a thought.
Xiaocheng

Michael McCandless <lu...@mikemccandless.com> wrote: "maureen tanuwidjaja"  wrote:
   
> "The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store."
>   It is not possible for me to reduce the number of fields needed to
>   store...
>   
>   Could you recommend a maxMergeDocs value that is small enough to
>   keep all segments small?
>   
>   I would also like to ask whether, if the optimize is successful,
>   searching will then be significantly faster compared to the
>   unoptimized index?

I think you'd need to test different values for your situation.  Maybe
try 66,000, which would give you ~10 segments at your current number of
docs?

>   I get search results in 30 seconds to 3 minutes, which is actually
>   quite unacceptable for the "search engine" I am building... Is there
>   any recommendation on how searching could be made faster?

I think you'll need to turn off norms.  I expect a lot of the slowness is
in loading the large norms files for the first time.

Mike




 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by Michael McCandless <lu...@mikemccandless.com>.
"maureen tanuwidjaja" <au...@yahoo.com> wrote:
   
> "The only simple workaround I can think of is to set maxMergeDocs to
> keep all segments "small".  But then you may have too many segments
> with time.  Either that or find a way to reduce the number of unique
> fields that you actually need to store."
>   It is not possible for me to reduce the number of fields needed to
>   store...
>   
>   Could you recommend a maxMergeDocs value that is small enough to
>   keep all segments small?
>   
>   I would also like to ask whether, if the optimize is successful,
>   searching will then be significantly faster compared to the
>   unoptimized index?

I think you'd need to test different values for your situation.  Maybe
try 66,000, which would give you ~10 segments at your current number of
docs?
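
(Roughly: 657,739 docs / 66,000 docs per segment ~= 10 segments.)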

>   I get search results in 30 seconds to 3 minutes, which is actually
>   quite unacceptable for the "search engine" I am building... Is there
>   any recommendation on how searching could be made faster?

I think you'll need to turn off norms.  I expect a lot of the slowness is
in loading the large norms files for the first time.

Mike



Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Oops, sorry, mistyping...

  I get search results in 30 SECONDS to 3 minutes, which is actually
quite unacceptable for the "search engine" I am building... Is there any
recommendation on how searching could be made faster?
  

maureen tanuwidjaja <au...@yahoo.com> wrote:  Hi Mike,
  
  
"The only simple workaround I can think of is to set maxMergeDocs to
keep all segments "small".  But then you may have too many segments
with time.  Either that or find a way to reduce the number of unique
fields that you actually need to store."
  It is not possible for me to reduce the number of fields needed to store...
  
  Could you recommend a maxMergeDocs value that is small enough to keep all segments small?
  
  I would also like to ask whether, if the optimize is successful, searching will then be significantly faster compared to the unoptimized index?
  
  I get search results in 30 seconds to 3 minutes, which is actually quite unacceptable for the "search engine" I am building... Is there any recommendation on how searching could be made faster?
  
  Thanks,
  Maureen
  
  

Michael McCandless  wrote:  "maureen tanuwidjaja"  wrote:

>   "One thing that stands out in your listing is: your norms file
>   (_1ke1.nrm) is enormous compared to all other files.  Are you indexing
>   many tiny docs where each doc has highly variable fields or
>   something?"
>   
>   Ya, I am also confused why this .nrm file is tremendous in size.
>   I am indexing a total of 657,739 XML documents.
>   The total number of fields is 37,552 (I am using XML tags as the
>   fields).

OK, this is going to be a problem for Lucene.

This case will definitely go over 2X disk usage during optimize.  I
will update the javadocs to call out this caveat.

The .nrm file (norms) requires 1 byte per document per unique field in
the segment, regardless of whether that document actually has that field
(i.e., it's not a "sparse" representation).

When you have many small docs, and each doc has (somewhat) different
fields from the others, this results in a tremendous amount of storage
for the norms.

The thing is, within one segment it may be OK, since that segment holds
only a subset of all docs and fields.  But when segments are merged (as
optimize does), the product of #docs and #fields grows
"multiplicatively" and requires far, far more storage than the sum of
the individual segments.

The only simple workaround I can think of is to set maxMergeDocs to
keep all segments "small".  But then you may have too many segments
with time.  Either that or find a way to reduce the number of unique
fields that you actually need to store.

Mike




 

Re: Urgent : How much actually the disk space needed to optimize the index?

Posted by maureen tanuwidjaja <au...@yahoo.com>.
Hi Mike,
  
  
"The only simple workaround I can think of is to set maxMergeDocs to
keep all segments "small".  But then you may have too many segments
with time.  Either that or find a way to reduce the number of unique
fields that you actually need to store."
  It is not possible for me to reduce the number of fields needed to store...
  
  Could you recommend a maxMergeDocs value that is small enough to keep all segments small?
  
  I would also like to ask whether, if the optimize is successful, searching will then be significantly faster compared to the unoptimized index?
  
  I get search results in 30 seconds to 3 minutes, which is actually quite unacceptable for the "search engine" I am building... Is there any recommendation on how searching could be made faster?
  
  Thanks,
  Maureen
  
  

Michael McCandless <lu...@mikemccandless.com> wrote:  "maureen tanuwidjaja"  wrote:

>   "One thing that stands out in your listing is: your norms file
>   (_1ke1.nrm) is enormous compared to all other files.  Are you indexing
>   many tiny docs where each doc has highly variable fields or
>   something?"
>   
>   Ya, I am also confused why this .nrm file is tremendous in size.
>   I am indexing a total of 657,739 XML documents.
>   The total number of fields is 37,552 (I am using XML tags as the
>   fields).

OK, this is going to be a problem for Lucene.

This case will definitely go over 2X disk usage during optimize.  I
will update the javadocs to call out this caveat.

The .nrm file (norms) requires 1 byte per document per unique field in
the segment, regardless of whether that document actually has that field
(i.e., it's not a "sparse" representation).

When you have many small docs, and each doc has (somewhat) different
fields from the others, this results in a tremendous amount of storage
for the norms.

The thing is, within one segment it may be OK, since that segment holds
only a subset of all docs and fields.  But when segments are merged (as
optimize does), the product of #docs and #fields grows
"multiplicatively" and requires far, far more storage than the sum of
the individual segments.

The only simple workaround I can think of is to set maxMergeDocs to
keep all segments "small".  But then you may have too many segments
with time.  Either that or find a way to reduce the number of unique
fields that you actually need to store.

Mike




 