Posted to java-user@lucene.apache.org by Sebastin <se...@gmail.com> on 2007/09/04 12:19:40 UTC

Java Heap Space -Out Of Memory Error

Hi All,
       I search across 3 Lucene index stores of 6 GB, 10 GB, and 10 GB of
records using the MultiReader class.

Here is the code snippet:

    Directory indexDir2 = FSDirectory.getDirectory(indexSourceDir02, false);
    IndexReader indexSource2 = IndexReader.open(indexDir2);

    Directory indexDir3 = FSDirectory.getDirectory(indexSourceDir03, false);
    IndexReader indexSource3 = IndexReader.open(indexDir3);

    Directory indexDir4 = FSDirectory.getDirectory(indexSourceDir04, false);
    IndexReader indexSource4 = IndexReader.open(indexDir4);

    IndexReader[] readArray = {indexSource2, indexSource3, indexSource4};
    // merged reader
    IndexReader mergedReader = new MultiReader(readArray);
    IndexSearcher is = new IndexSearcher(mergedReader);

    QueryParser parser = new QueryParser("contents", new StandardAnalyzer());

    String searchQuery = new StringBuffer()
            .append(inputNo)
            .append(" AND dateSc:[").append(fromDate)
            .append(" TO ").append(toDate).append("]")
            .append(" AND ").append(callTyp)
            .toString();

    Query callDetailquery = parser.parse(searchQuery);

    hits = is.search(callDetailquery);


It takes 300 MB of RAM for every search, and it is very slow. Is there any
other way to control the memory use and make searching faster? I use a
SINGLETON so that the IndexSearcher is created once and shared by all
instances.
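As an aside, the StringBuffer chain above can be isolated into a small helper, which makes the query shape easier to read. This is only a sketch: inputNo, fromDate, toDate, and callTyp keep the same meaning as in the snippet, and the class name is invented.

```java
class QueryBuilder {
    // Builds the same query string as the StringBuffer chain in the
    // snippet: "<inputNo> AND dateSc:[<from> TO <to>] AND <callTyp>".
    static String buildQuery(String inputNo, String fromDate,
                             String toDate, String callTyp) {
        return inputNo + " AND dateSc:[" + fromDate + " TO " + toDate
                + "] AND " + callTyp;
    }
}
```

For example (with made-up values), buildQuery("9840836588", "070901", "070910", "CALL") yields 9840836588 AND dateSc:[070901 TO 070910] AND CALL.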
-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12475468
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi All,
       is it possible to release the memory after every search in Lucene
for 50 GB of records?

testn wrote:
> 
> I think you store dateSc with full precision i.e. with time. You should
> consider to index it just date part or to the resolution you really need.
> It should reduce the memory it use when constructing DateRangeQuery and
> plus it will improve search performance as well.
> 
> 
> 
> Sebastin wrote:
> > [original message snipped]

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a13423503


Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
I think you store dateSc with full precision, i.e. including the time. You
should consider indexing only the date part, or only the resolution you
really need. That should reduce the memory used when constructing the date
range query, and it will improve search performance as well.
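A minimal sketch of what indexing only the date part could look like, assuming dateSc is stored as a yyMMdd string (the format Sebastin's queries use later in this thread); the helper class and method names are invented:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

class DateTerm {
    // Truncates a timestamp to day resolution (yyMMdd), so every event on
    // the same day indexes the same dateSc term. Fewer distinct terms means
    // less memory when a range query over dateSc enumerates them.
    static String toDayResolution(Date d) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyMMdd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(d);
    }
}
```

The same formatter would be used both at indexing time and when building the dateSc:[from TO to] range, so the two always agree on resolution.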



Sebastin wrote:
> [original message snipped]

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12476392


Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
As Mike mentioned, which version of Lucene are you using? Can you also
post the stack trace?


Sebastin wrote:
> 
> Hi testn,
>              i wrote the case wrongly actually the error is 
> 
> java.io.ioexception file not found-segments
> 
> testn wrote:
>> 
>> Should the file be "segments_8" and "segments.gen"? Why is it "Segment"?
>> The case is different.
>> 
>> 
>> Sebastin wrote:
>>> 
>>> java.io.IoException:File Not Found- Segments  is the error message
>>> 
>>> testn wrote:
>>>> 
>>>> What is the error message? Probably Mike, Erick or Yonik can help you
>>>> better on this since I'm no one in index area.
>>>> 
>>>> Sebastin wrote:
>>>>> 
>>>>> HI testn,
>>>>>              1.I optimize the Large Indexes of size 10 GB using
>>>>> Luke.it optimize all the content into a single CFS file and it
>>>>> generates segments.gen and segments_8 file when i search the item it
>>>>> shows an error that segments file is not there.could you help me in
>>>>> this 
>>>>> 
>>>>> testn wrote:
>>>>>> 
>>>>>> 1. You can close the searcher once you're done. If you want to reopen
>>>>>> the index, you can close and reopen only the updated 3 readers and
>>>>>> keep the 2 old indexreaders and reuse it. It should reduce the time
>>>>>> to reopen it.
>>>>>> 2. Make sure that you optimize it every once in a while
>>>>>> 3. You might consider separating indices in separated storage and use
>>>>>> ParallelReader
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Sebastin wrote:
>>>>>>> 
>>>>>>> The problem in my pplication are as follows:
>>>>>>>                  1.I am not able to see the updated records in my
>>>>>>> index store because i instantiate 
>>>>>>> IndexReader and IndexSearcher class once that is in the first
>>>>>>> search.further searches use the same IndexReaders(5 Directories) and
>>>>>>> IndexSearcher with different queries.
>>>>>>> 
>>>>>>>                 2.My search is very very slow First 2 Directories of
>>>>>>> size 10 GB each which are having old index records and no update in
>>>>>>> that remaining 3 Diretories are updated every second.
>>>>>>> 
>>>>>>>                 3.i am Indexing 20 million records per day so the
>>>>>>> Index store gets growing and it makes search very very slower.
>>>>>>>  
>>>>>>>                4.I am using searcherOne class as the global
>>>>>>> application helper class ,with the scope as APPLICATION it consists
>>>>>>> of one IndexReader and IndexSearcher get set method which will hold
>>>>>>> the IndexReader and IndexSearcher object after the First Search.it
>>>>>>> is used for all other searches.
>>>>>>> 
>>>>>>>               5.I am using Lucene 2.2.0 version, in a WEB
>>>>>>> Application which index 15 fields per document and Index 5
>>>>>>> Fieds,store 10 Fields.i am not using any sort in my query.for a
>>>>>>> single query upto the maximum it fetches 600 records from the index
>>>>>>> store(5 direcories)    
>>>>>>>                 
>>>>>>> 
>>>>>>> hossman wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>> : I set IndexSearcher as the application Object after the first
>>>>>>>> search.
>>>>>>>> 	...
>>>>>>>> : how can i reconstruct the new IndexSearcher for every hour to see
>>>>>>>> the
>>>>>>>> : updated records .
>>>>>>>> 
>>>>>>>> i'm confused ... my understanding based on the comments you made
>>>>>>>> below 
>>>>>>>> (in an earlier message) was that you already *were* constructing a
>>>>>>>> new  
>>>>>>>> IndexSearcher once an hour -- but every time you do that, your
>>>>>>>> memory 
>>>>>>>> usage grows, and and that sometimes you got OOM Errors.
>>>>>>>> 
>>>>>>>> if that's not what you said, then i think you need to explain, in
>>>>>>>> detail, 
>>>>>>>> in one message, exactly what your problem is.  And don't assume we 
>>>>>>>> understand anything -- tell us *EVERYTHING* (like, for example,
>>>>>>>> what the 
>>>>>>>> word "crore" means, how "searcherOne" is implemented, and the
>>>>>>>> answer to 
>>>>>>>> the specfic question i asked in my last message: does your
>>>>>>>> application, 
>>>>>>>> contain anywhere in it, any code that will close anything
>>>>>>>> (IndexSearchers 
>>>>>>>> or IndexReaders) ?
>>>>>>>> 
>>>>>>>> 
>>>>>>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore
>>>>>>>> to 6 crore.
>>>>>>>> : > for
>>>>>>>> : > : every second i am updating my Index. i instantiate
>>>>>>>> IndexSearcher object
>>>>>>>> : > one
>>>>>>>> : > : time for all the searches. for an hour can i see the updated
>>>>>>>> records in
>>>>>>>> : > the
>>>>>>>> : > : indexstore by reinstantiating IndexSearcher object.but the
>>>>>>>> problem when
>>>>>>>> : > i
>>>>>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is
>>>>>>>> there any
>>>>>>>> 
>>>>>>>> 
>>>>>>>> : > IndexSearcher are you explicitly closing both the old
>>>>>>>> IndexSearcher as 
>>>>>>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> -Hoss
>>>>>>>> 
>>>>>>>> 
>>>>>>>> ---------------------------------------------------------------------
>>>>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12657298


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
             I wrote the case wrongly; the actual error is:

java.io.IOException: File Not Found - segments

testn wrote:
> 
> Should the file be "segments_8" and "segments.gen"? Why is it "Segment"?
> The case is different.
> 
> 
> Sebastin wrote:
> > [earlier messages snipped]

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12657227


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
               Could you explain this point in more detail: "You can simply
create a wrapper that returns a MultiReader which you can cache for a while
and close the oldest index once the date rolls"? I am not able to follow it.
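The rotation part of that suggestion, stripped of Lucene specifics, might look like the sketch below: keep the newest N day-indexes open and close the oldest one when a new day's index is added. The class is invented for illustration; in the real application the entries would be IndexReaders, and the wrapper would hand back a MultiReader built over the current window.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

class RollingWindow<T extends Closeable> {
    private final int maxDays;
    private final Deque<T> window = new ArrayDeque<T>();

    RollingWindow(int maxDays) { this.maxDays = maxDays; }

    // Called when a new day's index is opened: add it to the window and,
    // once more than maxDays are open, close and drop the oldest one so
    // its memory can be reclaimed.
    void add(T newest) {
        window.addLast(newest);
        if (window.size() > maxDays) {
            try {
                window.removeFirst().close();
            } catch (IOException ignored) {
                // closing a retired day's index is best-effort here
            }
        }
    }

    int size() { return window.size(); }
}
```

Whenever the window changes, the wrapper would rebuild its cached MultiReader from the window's contents; searches in between reuse the cached one.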

testn wrote:
> 
> If you know that there are only 15 days of indexes you need to search on,
> you just need to open only the latest 15 indexes at a time right? You can
> simply create a wrapper that return MultiReader which you can cache for a
> while and close the oldest index once the date rolls.
> 
> 
> Sebastin wrote:
>> 
>> HI testn,
>> 
>> it gives performance improvement while optimizing the Index. 
>> 
>> Now i seprate the IndexStore on a daily basis.(ie) 
>> For Every Day it create a new Index store ,sep- 08-2007,sep-09-2007 like
>> wise it will minimize the size of the IndexStore.could you give me an
>> idea on how to open every day folders for every search.
>> 
>> Query I use here is,
>> 
>> 9840836588 AND dateSc:[070901 TO 070910] 
>> 
>> 07---->year (2007)
>> 09---->month(september)
>> 01----->day
>> 
>> i restrict for 15 days that it is possible to search 15 days record in my
>> application.at a time 10 users aare going to search every store.is there
>> any other better way to improve the search performance to avoid memory
>> problem as well as speed of the search.
>> 
>> 
>> 
>> 
>> 
>> 
>> testn wrote:
>>> 
>>> So did you see any improvement in performance?
>>> 
>>> Sebastin wrote:
>>>> 
>>>> It works finally .i use Lucene 2.2  in my application.thanks testn and
>>>> Mike
>>>> 
>>>> Michael McCandless-2 wrote:
>>>>> 
>>>>> 
>>>>> It sounds like there may be a Lucene version mismatch?  When Luke was
>>>>> used
>>>>> it was likely based on Lucene 2.2, but it sounds like an older version
>>>>> of
>>>>> Lucene is now being used to open the index?
>>>>> 
>>>>> Mike
>>>>> 
>>>>> "testn" <te...@doramail.com> wrote:
>>>>>> 
>>>>>> Should the file be "segments_8" and "segments.gen"? Why is it
>>>>>> "Segment"?
>>>>>> The
>>>>>> case is different.
>>>>>> 
>>>>>> 
>>>>>> Sebastin wrote:
>>>>>> > [earlier messages snipped]

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12690737


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
              I now search 3 folders with a total size of 1.5 GB, and every
search still consumes a lot of memory. I close all the IndexReaders once I
finish a search, and I optimize the files using Luke. When I set the
IndexSearcher object as an application-level object, it is not possible to
see the updated records.
Could you guide me on how to resolve this memory problem?
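One way out of the "shared searcher never sees updates, but reopening leaks memory" bind discussed in this thread is a holder that swaps in a freshly opened searcher on a schedule and closes the old one. This is only a sketch (the class name is invented, and the resource is abstracted as any Closeable; in Lucene 2.2 terms it would be the IndexSearcher together with its readers):

```java
import java.io.Closeable;
import java.io.IOException;

class SearcherHolder<T extends Closeable> {
    private T current;

    // Every search borrows the shared instance...
    synchronized T get() { return current; }

    // ...while a background task (e.g. hourly) opens a fresh instance and
    // swaps it in, closing the old one so its heap can be reclaimed.
    synchronized void swap(T fresh) {
        T old = current;
        current = fresh;
        if (old != null) {
            try {
                old.close();
            } catch (IOException ignored) {
                // best-effort close of the retired instance
            }
        }
    }
}
```

Caveat: in production the old instance must not be closed while a search is still running against it; a reference count (acquire/release around each search) fixes that, but is omitted here for brevity.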

testn wrote:
> 
> As I mentioned, IndexReader is the one that holds the memory. You should
> explicitly close the underlying IndexReader to make sure that the reader
> releases the memory. 
> 
> 
> 
> Sebastin wrote:
>> 
>> Hi testn,
>>               Every IndexFolder is of size 1.5 GB of size,eventhough when
>> i used to Open and close the IndexSearcher it wont release the memory for
>> all the searches.
>>               When i set the IndexSearcher object as the Application
>> Scope object its not possile for me to see current day records.
>> 
>>                  Could you give me an Idea how to trigger out this
>> problem. 
>>       
>> 
>> testn wrote:
>>> 
>>> If you know that there are only 15 days of indexes you need to search
>>> on, you just need to open only the latest 15 indexes at a time right?
>>> You can simply create a wrapper that return MultiReader which you can
>>> cache for a while and close the oldest index once the date rolls.
>>> 
>>> 
>>> Sebastin wrote:
>>>> 
>>>> HI testn,
>>>> 
>>>> it gives performance improvement while optimizing the Index. 
>>>> 
>>>> Now i seprate the IndexStore on a daily basis.(ie) 
>>>> For Every Day it create a new Index store ,sep- 08-2007,sep-09-2007
>>>> like wise it will minimize the size of the IndexStore.could you give me
>>>> an idea on how to open every day folders for every search.
>>>> 
>>>> Query I use here is,
>>>> 
>>>> 9840836588 AND dateSc:[070901 TO 070910] 
>>>> 
>>>> 07---->year (2007)
>>>> 09---->month(september)
>>>> 01----->day
>>>> 
>>>> i restrict for 15 days that it is possible to search 15 days record in
>>>> my application.at a time 10 users aare going to search every store.is
>>>> there any other better way to improve the search performance to avoid
>>>> memory problem as well as speed of the search.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> testn wrote:
>>>>> 
>>>>> So did you see any improvement in performance?
>>>>> 
>>>>> Sebastin wrote:
>>>>>> 
>>>>>> It works finally .i use Lucene 2.2  in my application.thanks testn
>>>>>> and Mike
>>>>>> 
>>>>>> Michael McCandless-2 wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> It sounds like there may be a Lucene version mismatch?  When Luke
>>>>>>> was used
>>>>>>> it was likely based on Lucene 2.2, but it sounds like an older
>>>>>>> version of
>>>>>>> Lucene is now being used to open the index?
>>>>>>> 
>>>>>>> Mike
>>>>>>> 
>>>>>>> "testn" <te...@doramail.com> wrote:
>>>>>>>> 
>>>>>>>> Should the file be "segments_8" and "segments.gen"? Why is it
>>>>>>>> "Segment"?
>>>>>>>> The
>>>>>>>> case is different.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Sebastin wrote:
>>>>>>>> > 
>>>>>>>> > java.io.IOException: File Not Found - Segments  is the error
>>>>>>>> message
>>>>>>>> > 
>>>>>>>> > testn wrote:
>>>>>>>> >> 
>>>>>>>> >> What is the error message? Probably Mike, Erick or Yonik can
>>>>>>>> help you
>>>>>>>> >> better on this since I'm no one in index area.
>>>>>>>> >> 
>>>>>>>> >> Sebastin wrote:
>>>>>>>> >>> 
>>>>>>>> >>> HI testn,
>>>>>>>> >>>              1.I optimize the large 10 GB indexes using Luke. It
>>>>>>>> >>> merges all the content into a single CFS file and generates
>>>>>>>> >>> segments.gen and segments_8 files. When I search, it shows an
>>>>>>>> >>> error that the segments file is not there. Could you help me
>>>>>>>> >>> with this?
>>>>>>>> >>> 
>>>>>>>> >>> testn wrote:
>>>>>>>> >>>> 
>>>>>>>> >>>> 1. You can close the searcher once you're done. If you want to
>>>>>>>> reopen
>>>>>>>> >>>> the index, you can close and reopen only the updated 3 readers
>>>>>>>> and keep
>>>>>>>> >>>> the 2 old indexreaders and reuse it. It should reduce the time
>>>>>>>> to
>>>>>>>> >>>> reopen it.
>>>>>>>> >>>> 2. Make sure that you optimize it every once in a while
>>>>>>>> >>>> 3. You might consider separating indices in separated storage
>>>>>>>> and use
>>>>>>>> >>>> ParallelReader
>>>>>>>> >>>> 
>>>>>>>> >>>> 
>>>>>>>> >>>> 
>>>>>>>> >>>> Sebastin wrote:
>>>>>>>> >>>>> 
>>>>>>>> >>>>> The problems in my application are as follows:
>>>>>>>> >>>>>                  1.I am not able to see the updated records in
>>>>>>>> >>>>> my index store because I instantiate the IndexReader and
>>>>>>>> >>>>> IndexSearcher classes once, on the first search; further
>>>>>>>> >>>>> searches use the same IndexReaders (5 directories) and
>>>>>>>> >>>>> IndexSearcher with different queries.
>>>>>>>> >>>>> 
>>>>>>>> >>>>>                 2.My search is very slow. The first 2
>>>>>>>> >>>>> directories of size 10 GB each hold old index records with no
>>>>>>>> >>>>> updates; the remaining 3 directories are updated every second.
>>>>>>>> >>>>> 
>>>>>>>> >>>>>                 3.I am indexing 20 million records per day, so
>>>>>>>> >>>>> the index store keeps growing and search gets slower and
>>>>>>>> >>>>> slower.
>>>>>>>> >>>>>  
>>>>>>>> >>>>>                4.I am using a searcherOne class as the global
>>>>>>>> >>>>> application helper class, with APPLICATION scope. It consists
>>>>>>>> >>>>> of getters/setters that hold the IndexReader and IndexSearcher
>>>>>>>> >>>>> objects after the first search; they are reused for all other
>>>>>>>> >>>>> searches.
>>>>>>>> >>>>> 
>>>>>>>> >>>>>               5.I am using Lucene 2.2.0 in a web application
>>>>>>>> >>>>> which has 15 fields per document (5 indexed, 10 stored). I
>>>>>>>> >>>>> am not using any sort in my query. A single query fetches at
>>>>>>>> >>>>> most 600 records from the index store (5 directories).
>>>>>>>> >>>>>                 
>>>>>>>> >>>>> 
>>>>>>>> >>>>> hossman wrote:
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> : I set IndexSearcher as the application Object after the
>>>>>>>> first
>>>>>>>> >>>>>> search.
>>>>>>>> >>>>>> 	...
>>>>>>>> >>>>>> : how can i reconstruct the new IndexSearcher for every hour
>>>>>>>> to see
>>>>>>>> >>>>>> the
>>>>>>>> >>>>>> : updated records .
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> i'm confused ... my understanding based on the comments you
>>>>>>>> made
>>>>>>>> >>>>>> below 
>>>>>>>> >>>>>> (in an earlier message) was that you already *were*
>>>>>>>> constructing a
>>>>>>>> >>>>>> new  
>>>>>>>> >>>>>> IndexSearcher once an hour -- but every time you do that,
>>>>>>>> your memory 
>>>>>>>> >>>>>> usage grows, and that sometimes you got OOM Errors.
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> if that's not what you said, then i think you need to
>>>>>>>> explain, in
>>>>>>>> >>>>>> detail, 
>>>>>>>> >>>>>> in one message, exactly what your problem is.  And don't
>>>>>>>> assume we 
>>>>>>>> >>>>>> understand anything -- tell us *EVERYTHING* (like, for
>>>>>>>> example, what
>>>>>>>> >>>>>> the 
>>>>>>>> >>>>>> word "crore" means, how "searcherOne" is implemented, and
>>>>>>>> the answer
>>>>>>>> >>>>>> to 
>>>>>>>> >>>>>> the specific question i asked in my last message: does your
>>>>>>>> >>>>>> application, 
>>>>>>>> >>>>>> contain anywhere in it, any code that will close anything
>>>>>>>> >>>>>> (IndexSearchers 
>>>>>>>> >>>>>> or IndexReaders) ?
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> : > : I use StandardAnalyzer.the records daily ranges from 5
>>>>>>>> crore to
>>>>>>>> >>>>>> 6 crore.
>>>>>>>> >>>>>> : > for
>>>>>>>> >>>>>> : > : every second i am updating my Index. i instantiate
>>>>>>>> >>>>>> IndexSearcher object
>>>>>>>> >>>>>> : > one
>>>>>>>> >>>>>> : > : time for all the searches. for an hour can i see the
>>>>>>>> updated
>>>>>>>> >>>>>> records in
>>>>>>>> >>>>>> : > the
>>>>>>>> >>>>>> : > : indexstore by reinstantiating IndexSearcher object.but
>>>>>>>> the
>>>>>>>> >>>>>> problem when
>>>>>>>> >>>>>> : > i
>>>>>>>> >>>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets
>>>>>>>> appended.is
>>>>>>>> >>>>>> there any
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> : > IndexSearcher are you explicitly closing both the old
>>>>>>>> >>>>>> IndexSearcher as 
>>>>>>>> >>>>>> : > well as all of 4 of those old IndexReaders and the
>>>>>>>> MultiReader?
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> -Hoss
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>>
>>>>>>>> ---------------------------------------------------------------------
>>>>>>>> >>>>>> To unsubscribe, e-mail:
>>>>>>>> java-user-unsubscribe@lucene.apache.org
>>>>>>>> >>>>>> For additional commands, e-mail:
>>>>>>>> java-user-help@lucene.apache.org
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>>> 
>>>>>>>> >>>>> 
>>>>>>>> >>>>> 
>>>>>>>> >>>> 
>>>>>>>> >>>> 
>>>>>>>> >>> 
>>>>>>>> >>> 
>>>>>>>> >> 
>>>>>>>> >> 
>>>>>>>> > 
>>>>>>>> > 
>>>>>>>> 
>>>>>>>> -- 
>>>>>>>> View this message in context:
>>>>>>>> http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12655880
>>>>>>>> Sent from the Lucene - Java Users mailing list archive at
>>>>>>>> Nabble.com.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> ---------------------------------------------------------------------
>>>>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>>>>> 
>>>>>>> 
>>>>>>> ---------------------------------------------------------------------
>>>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12775620
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
As I mentioned, IndexReader is the one that holds the memory. You should
explicitly close the underlying IndexReader to make sure that the reader
releases the memory. 
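In code, a hedged sketch against the Lucene 2.2 API, reusing the `is` and `mergedReader` names from the snippet at the top of the thread (the helper method itself is invented for illustration):

```java
import java.io.IOException;

import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;

class SearcherCleanup {
    // Sketch only: when an IndexSearcher is constructed from an IndexReader,
    // IndexSearcher.close() does NOT close that reader. The reader (and the
    // heap it holds) is only released when you close it yourself.
    static void closeSearcherAndReader(IndexSearcher is, MultiReader mergedReader)
            throws IOException {
        is.close();           // releases the searcher, not the caller's reader
        mergedReader.close(); // MultiReader.close() also closes its sub-readers
    }
}
```

If the sub-readers were opened independently and reused outside the MultiReader, they would need their own close() calls as well.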



Sebastin wrote:
> 
> Hi testn,
>               Every IndexFolder is of size 1.5 GB of size,eventhough when
> i used to Open and close the IndexSearcher it wont release the memory for
> all the searches.
>               When i set the IndexSearcher object as the Application Scope
> object its not possile for me to see current day records.
> 
>                  Could you give me an Idea how to trigger out this
> problem. 
>       
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12735206
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
              Every index folder is 1.5 GB in size. Even though I open and
close the IndexSearcher for every search, it won't release the memory.
              When I set the IndexSearcher object as an application-scope
object, it is not possible for me to see the current day's records.

                 Could you give me an idea of how to sort out this problem. 
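The tension described here (closing per search leaks nothing new in, keeping one application-scope searcher goes stale) is usually resolved by swapping in a fresh searcher and closing the old one. A minimal, Lucene-free sketch of that swap pattern; the class and method names are invented for illustration, and in the real application the Closeable would be the MultiReader (whose close() should also release its sub-readers in Lucene 2.2):

```java
import java.io.Closeable;
import java.io.IOException;

// Holds the currently shared resource (e.g. the application-scope
// reader/searcher pair) and closes the previous one on refresh, so
// periodic reopening does not keep appending to the heap.
class RefreshingHolder<T extends Closeable> {
    private T current;

    public synchronized T get() {
        return current;
    }

    // Swap in a freshly opened resource and close the old one.
    public synchronized void refresh(T fresh) throws IOException {
        T old = current;
        current = fresh;
        if (old != null) {
            old.close(); // releases the memory held by the old reader
        }
    }
}
```

On each refresh cycle (hourly, say), open the new MultiReader first, call refresh(), and let searches use whatever get() currently returns.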
      

testn wrote:
> 
> If you know that there are only 15 days of indexes you need to search on,
> you just need to open only the latest 15 indexes at a time right? You can
> simply create a wrapper that return MultiReader which you can cache for a
> while and close the oldest index once the date rolls.
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12729051
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
If you know that you only ever need to search the last 15 days of indexes,
you just need to open the latest 15 indexes at a time, right? You can simply
create a wrapper that returns a MultiReader, which you can cache for a while
and close the oldest index's reader once the date rolls over.
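A sketch of the date arithmetic such a wrapper needs, assuming one index directory per day named yyyy-MM-dd (the thread's actual folder naming, e.g. sep-09-2007, differs slightly; the format and the IndexWindow/latestIndexDirs names are illustrative assumptions). Each returned name would be opened via FSDirectory/IndexReader and combined into the cached MultiReader:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

// Compute the names of the daily index directories for the last `days`
// days, most recent first. When the date rolls over, the directory that
// drops off the end of this list is the reader to close.
class IndexWindow {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

    static List<String> latestIndexDirs(LocalDate today, int days) {
        List<String> dirs = new ArrayList<>();
        for (int i = 0; i < days; i++) {
            dirs.add(today.minusDays(i).format(FMT));
        }
        return dirs;
    }
}
```

Recomputing this list once per day (or per refresh cycle) tells the wrapper which reader to open and which to close.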


Sebastin wrote:
> 
> HI testn,
> 
> it gives performance improvement while optimizing the Index. 
> 
> Now i seprate the IndexStore on a daily basis.(ie) 
> For Every Day it create a new Index store ,sep- 08-2007,sep-09-2007 like
> wise it will minimize the size of the IndexStore.could you give me an idea
> on how to open every day folders for every search.
> 
> Query I use here is,
> 
> 9840836588 AND dateSc:[070901 TO 070910] 
> 
> 07---->year (2007)
> 09---->month(september)
> 01----->day
> 
> i restrict for 15 days that it is possible to search 15 days record in my
> application.at a time 10 users aare going to search every store.is there
> any other better way to improve the search performance to avoid memory
> problem as well as speed of the search.
> 
> 
> 
> 
> 
> 
>>>>> >>>>>> 
>>>>> >>>>>> 
>>>>> >>>>>> 
>>>>> >>>>> 
>>>>> >>>>> 
>>>>> >>>> 
>>>>> >>>> 
>>>>> >>> 
>>>>> >>> 
>>>>> >> 
>>>>> >> 
>>>>> > 
>>>>> > 
>>>>> 
>>>>> -- 
>>>>> View this message in context:
>>>>> http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12655880
>>>>> Sent from the Lucene - Java Users mailing list archive at Nabble.com.
>>>>> 
>>>>> 
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>> 
>>>> 
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12690100
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,

Optimizing the index did give a performance improvement.

I now separate the index store on a daily basis, i.e. every day a new
index store is created (sep-08-2007, sep-09-2007, and so on), which
keeps each store small. Could you give me an idea of how to open each
day's folder for every search?

The query I use here is:

9840836588 AND dateSc:[070901 TO 070910]

07 ----> year (2007)
09 ----> month (September)
01 ----> day

I restrict searches to 15 days, so at most 15 days of records can be
searched in my application. At any given time about 10 users are
searching every store. Is there any better way to improve search
performance, both to avoid the memory problem and to speed up the
search?
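One way to pick the per-day stores for a search is to enumerate the days in the query's date range and build each day's folder name from it. The sketch below is only an illustration: the method name listDailyStores is made up here, and it assumes folder names of the form "sep-08-2007" as described above.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;
import java.util.Locale;

public class DailyStores {

    // Build the list of per-day index folder names (e.g. "sep-01-2007")
    // for the inclusive range fromDate..toDate, both given in the same
    // "yyMMdd" form as the dateSc field above.
    public static List<String> listDailyStores(String fromDate, String toDate) {
        SimpleDateFormat in = new SimpleDateFormat("yyMMdd", Locale.ENGLISH);
        SimpleDateFormat out = new SimpleDateFormat("MMM-dd-yyyy", Locale.ENGLISH);
        List<String> stores = new ArrayList<String>();
        try {
            Calendar day = Calendar.getInstance();
            day.setTime(in.parse(fromDate));
            Calendar end = Calendar.getInstance();
            end.setTime(in.parse(toDate));
            while (!day.after(end)) {
                // "Sep-01-2007" lower-cased to match the folder naming
                stores.add(out.format(day.getTime()).toLowerCase(Locale.ENGLISH));
                day.add(Calendar.DAY_OF_MONTH, 1);
            }
        } catch (ParseException e) {
            throw new IllegalArgumentException("dates must be yyMMdd", e);
        }
        return stores;
    }

    public static void main(String[] args) {
        // For the example query range dateSc:[070901 TO 070910]:
        for (String name : listDailyStores("070901", "070910")) {
            System.out.println(name); // sep-01-2007 ... sep-10-2007
        }
    }
}
```

Each resulting name could then be passed to FSDirectory.getDirectory / IndexReader.open and the per-day readers combined in a MultiReader, as in the code earlier in this thread.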






testn wrote:
> 
> So did you see any improvement in performance?
> 
> Sebastin wrote:
>> 
>> It finally works. I use Lucene 2.2 in my application. Thanks, testn
>> and Mike.
>> 
>> Michael McCandless-2 wrote:
>>> 
>>> 
>>> It sounds like there may be a Lucene version mismatch?  When Luke
>>> was used, it was likely based on Lucene 2.2, but it sounds like an
>>> older version of Lucene is now being used to open the index?
>>> 
>>> Mike
>>> 
>>> "testn" <te...@doramail.com> wrote:
>>>> 
>>>> Should the file be "segments_8" and "segments.gen"? Why is it
>>>> "Segment"? The case is different.
>>>> 
>>>> 
>>>> Sebastin wrote:
>>>> > 
>>>> > java.io.IOException: File Not Found - Segments  is the error
>>>> > message.
>>>> > 
>>>> > testn wrote:
>>>> >> 
>>>> >> What is the error message? Probably Mike, Erick or Yonik can
>>>> >> help you better on this, since I'm no one in the index area.
>>>> >> 
>>>> >> Sebastin wrote:
>>>> >>> 
>>>> >>> Hi testn,
>>>> >>> 1. I optimized the large 10 GB indexes using Luke. It merged
>>>> >>> all the content into a single CFS file and generated
>>>> >>> segments.gen and segments_8 files, but when I search, I get
>>>> >>> an error that the segments file is not there. Could you help
>>>> >>> me with this?
>>>> >>> 
>>>> >>> testn wrote:
>>>> >>>> 
>>>> >>>> 1. You can close the searcher once you're done. If you want
>>>> >>>> to reopen the index, you can close and reopen only the 3
>>>> >>>> updated readers, and keep and reuse the 2 old IndexReaders.
>>>> >>>> That should reduce the time to reopen it.
>>>> >>>> 2. Make sure that you optimize the index every once in a
>>>> >>>> while.
>>>> >>>> 3. You might consider separating the indices into separate
>>>> >>>> storage and using ParallelReader.
>>>> >>>> 
>>>> >>>> 
>>>> >>>> 
>>>> >>>> Sebastin wrote:
>>>> >>>>> 
>>>> >>>>> The problems in my application are as follows:
>>>> >>>>> 
>>>> >>>>> 1. I am not able to see updated records in my index store,
>>>> >>>>> because I instantiate the IndexReader and IndexSearcher
>>>> >>>>> classes once, on the first search. Further searches reuse
>>>> >>>>> the same IndexReaders (5 directories) and IndexSearcher
>>>> >>>>> with different queries.
>>>> >>>>> 
>>>> >>>>> 2. My search is very, very slow. The first 2 directories,
>>>> >>>>> of 10 GB each, hold old index records and are never
>>>> >>>>> updated; the remaining 3 directories are updated every
>>>> >>>>> second.
>>>> >>>>> 
>>>> >>>>> 3. I am indexing 20 million records per day, so the index
>>>> >>>>> store keeps growing and search gets slower and slower.
>>>> >>>>> 
>>>> >>>>> 4. I am using a searcherOne class as a global application
>>>> >>>>> helper class, with APPLICATION scope. It holds one
>>>> >>>>> IndexReader and one IndexSearcher via get/set methods after
>>>> >>>>> the first search, and they are used for all other searches.
>>>> >>>>> 
>>>> >>>>> 5. I am using Lucene 2.2.0 in a web application. Each
>>>> >>>>> document has 15 fields: 5 are indexed and 10 are stored. I
>>>> >>>>> am not using any sort in my query. A single query fetches
>>>> >>>>> at most 600 records from the index store (5 directories).
>>>> >>>>> 
>>>> >>>>> hossman wrote:
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> : I set IndexSearcher as the application Object after the
>>>> >>>>>> : first search.
>>>> >>>>>> 	...
>>>> >>>>>> : how can i reconstruct the new IndexSearcher for every
>>>> >>>>>> : hour to see the updated records .
>>>> >>>>>> 
>>>> >>>>>> I'm confused ... my understanding, based on the comments
>>>> >>>>>> you made below (in an earlier message), was that you
>>>> >>>>>> already *were* constructing a new IndexSearcher once an
>>>> >>>>>> hour -- but that every time you do that, your memory usage
>>>> >>>>>> grows, and that sometimes you get OOM Errors.
>>>> >>>>>> 
>>>> >>>>>> If that's not what you said, then I think you need to
>>>> >>>>>> explain, in detail, in one message, exactly what your
>>>> >>>>>> problem is.  And don't assume we understand anything --
>>>> >>>>>> tell us *EVERYTHING* (like, for example, what the word
>>>> >>>>>> "crore" means, how "searcherOne" is implemented, and the
>>>> >>>>>> answer to the specific question I asked in my last
>>>> >>>>>> message: does your application contain, anywhere in it,
>>>> >>>>>> any code that will close anything (IndexSearchers or
>>>> >>>>>> IndexReaders)?
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> : > : I use StandardAnalyzer. The records daily range from
>>>> >>>>>> : > : 5 crore to 6 crore. Every second I am updating my
>>>> >>>>>> : > : index. I instantiate the IndexSearcher object one
>>>> >>>>>> : > : time for all the searches. Can I see the updated
>>>> >>>>>> : > : records in the index store by reinstantiating the
>>>> >>>>>> : > : IndexSearcher object every hour? But the problem is,
>>>> >>>>>> : > : when I reinstantiate IndexSearcher, my RAM usage
>>>> >>>>>> : > : grows. Is there any
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> : > Are you explicitly closing both the old IndexSearcher
>>>> >>>>>> : > as well as all 4 of those old IndexReaders and the
>>>> >>>>>> : > MultiReader?
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> -Hoss

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12687410
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
So did you see any improvement in performance?


-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12679970
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
It finally works. I use Lucene 2.2 in my application. Thanks, testn and Mike.


-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12668746
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Michael McCandless <lu...@mikemccandless.com>.
It sounds like there may be a Lucene version mismatch?  When Luke was used
it was likely based on Lucene 2.2, but it sounds like an older version of
Lucene is now being used to open the index?

Mike

"testn" <te...@doramail.com> wrote:
> 
> Should the file be "segments_8" and "segments.gen"? Why is it "Segment"?
> The
> case is different.
> 
> 
> Sebastin wrote:
> > 
> > java.io.IoException:File Not Found- Segments  is the error message
> > 
> > testn wrote:
> >> 
> >> What is the error message? Probably Mike, Erick or Yonik can help you
> >> better on this since I'm no one in index area.
> >> 
> >> Sebastin wrote:
> >>> 
> >>> HI testn,
> >>>              1.I optimize the Large Indexes of size 10 GB using Luke.it
> >>> optimize all the content into a single CFS file and it generates
> >>> segments.gen and segments_8 file when i search the item it shows an
> >>> error that segments file is not there.could you help me in this 
> >>> 
> >>> testn wrote:
> >>>> 
> >>>> 1. You can close the searcher once you're done. If you want to reopen
> >>>> the index, you can close and reopen only the updated 3 readers and keep
> >>>> the 2 old indexreaders and reuse it. It should reduce the time to
> >>>> reopen it.
> >>>> 2. Make sure that you optimize it every once in a while
> >>>> 3. You might consider separating indices in separated storage and use
> >>>> ParallelReader
> >>>> 
> >>>> 
> >>>> 
> >>>> Sebastin wrote:
> >>>>> 
> >>>>> The problem in my pplication are as follows:
> >>>>>                  1.I am not able to see the updated records in my
> >>>>> index store because i instantiate 
> >>>>> IndexReader and IndexSearcher class once that is in the first
> >>>>> search.further searches use the same IndexReaders(5 Directories) and
> >>>>> IndexSearcher with different queries.
> >>>>> 
> >>>>>                 2.My search is very very slow First 2 Directories of
> >>>>> size 10 GB each which are having old index records and no update in
> >>>>> that remaining 3 Diretories are updated every second.
> >>>>> 
> >>>>>                 3.i am Indexing 20 million records per day so the
> >>>>> Index store gets growing and it makes search very very slower.
> >>>>>  
> >>>>>                4.I am using searcherOne class as the global
> >>>>> application helper class ,with the scope as APPLICATION it consists of
> >>>>> one IndexReader and IndexSearcher get set method which will hold the
> >>>>> IndexReader and IndexSearcher object after the First Search.it is used
> >>>>> for all other searches.
> >>>>> 
> >>>>>               5.I am using Lucene 2.2.0 version, in a WEB Application
> >>>>> which index 15 fields per document and Index 5 Fieds,store 10 Fields.i
> >>>>> am not using any sort in my query.for a single query upto the maximum
> >>>>> it fetches 600 records from the index store(5 direcories)    
> >>>>>                 
> >>>>> 
> >>>>> hossman wrote:
> >>>>>> 
> >>>>>> 
> >>>>>> : I set IndexSearcher as the application Object after the first
> >>>>>> search.
> >>>>>> 	...
> >>>>>> : how can i reconstruct the new IndexSearcher for every hour to see
> >>>>>> the
> >>>>>> : updated records .
> >>>>>> 
> >>>>>> i'm confused ... my understanding based on the comments you made
> >>>>>> below 
> >>>>>> (in an earlier message) was that you already *were* constructing a
> >>>>>> new  
> >>>>>> IndexSearcher once an hour -- but every time you do that, your memory 
> >>>>>> usage grows, and and that sometimes you got OOM Errors.
> >>>>>> 
> >>>>>> if that's not what you said, then i think you need to explain, in
> >>>>>> detail, 
> >>>>>> in one message, exactly what your problem is.  And don't assume we 
> >>>>>> understand anything -- tell us *EVERYTHING* (like, for example, what
> >>>>>> the 
> >>>>>> word "crore" means, how "searcherOne" is implemented, and the answer
> >>>>>> to 
> >>>>>> the specfic question i asked in my last message: does your
> >>>>>> application, 
> >>>>>> contain anywhere in it, any code that will close anything
> >>>>>> (IndexSearchers 
> >>>>>> or IndexReaders) ?
> >>>>>> 
> >>>>>> 
> >>>>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to
> >>>>>> 6 crore.
> >>>>>> : > for
> >>>>>> : > : every second i am updating my Index. i instantiate
> >>>>>> IndexSearcher object
> >>>>>> : > one
> >>>>>> : > : time for all the searches. for an hour can i see the updated
> >>>>>> records in
> >>>>>> : > the
> >>>>>> : > : indexstore by reinstantiating IndexSearcher object.but the
> >>>>>> problem when
> >>>>>> : > i
> >>>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is
> >>>>>> there any
> >>>>>> 
> >>>>>> 
> >>>>>> : > IndexSearcher are you explicitly closing both the old
> >>>>>> IndexSearcher as 
> >>>>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> -Hoss
> >>>>>> 
> >>>>>> 
> >>>>>> ---------------------------------------------------------------------
> >>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> >>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>> 
> >>>>> 
> >>>> 
> >>>> 
> >>> 
> >>> 
> >> 
> >> 
> > 
> > 
> 
> -- 
> View this message in context:
> http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12655880
> Sent from the Lucene - Java Users mailing list archive at Nabble.com.
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
> 



Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
Should the file be "segments_8" and "segments.gen"? Why is it "Segment"? The
case is different.
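A quick, Lucene-independent way to see exactly which segments files an index directory contains (and their exact case) is to list the directory and match the expected names. This is only a diagnostic sketch; the class and method names (SegmentsCheck, isSegmentsFile) are made up here, not part of Lucene.

```java
import java.io.File;

public class SegmentsCheck {

    // Lucene's file names are exact and lower-case: "segments_N"
    // (N is a base-36 generation number) and "segments.gen".
    // A file called "Segments" would not be recognized.
    public static boolean isSegmentsFile(String name) {
        return name.equals("segments.gen") || name.matches("segments_[0-9a-z]+");
    }

    public static void main(String[] args) {
        // Index directory to inspect; the path is an assumption,
        // pass your real index directory as the first argument.
        File indexDir = new File(args.length > 0 ? args[0] : ".");
        File[] files = indexDir.listFiles();
        if (files == null) {
            System.out.println("Not a directory: " + indexDir);
            return;
        }
        for (File f : files) {
            if (isSegmentsFile(f.getName())) {
                System.out.println(f.getName());
            }
        }
    }
}
```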


>>>>>> understand anything -- tell us *EVERYTHING* (like, for example, what
>>>>>> the 
>>>>>> word "crore" means, how "searcherOne" is implemented, and the answer
>>>>>> to 
>>>>>> the specfic question i asked in my last message: does your
>>>>>> application, 
>>>>>> contain anywhere in it, any code that will close anything
>>>>>> (IndexSearchers 
>>>>>> or IndexReaders) ?
>>>>>> 
>>>>>> 
>>>>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to
>>>>>> 6 crore.
>>>>>> : > for
>>>>>> : > : every second i am updating my Index. i instantiate
>>>>>> IndexSearcher object
>>>>>> : > one
>>>>>> : > : time for all the searches. for an hour can i see the updated
>>>>>> records in
>>>>>> : > the
>>>>>> : > : indexstore by reinstantiating IndexSearcher object.but the
>>>>>> problem when
>>>>>> : > i
>>>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is
>>>>>> there any
>>>>>> 
>>>>>> 
>>>>>> : > IndexSearcher are you explicitly closing both the old
>>>>>> IndexSearcher as 
>>>>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> -Hoss
>>>>>> 
>>>>>> 
>>>>>> ---------------------------------------------------------------------
>>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12655880
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
The error message is "java.io.IOException: File Not Found - Segments".

testn wrote:
> 
> What is the error message? Probably Mike, Erick or Yonik can help you
> better on this since I'm no one in index area.
> 
> Sebastin wrote:
>> 
>> HI testn,
>>              1.I optimize the Large Indexes of size 10 GB using Luke.it
>> optimize all the content into a single CFS file and it generates
>> segments.gen and segments_8 file when i search the item it shows an error
>> that segments file is not there.could you help me in this 
>> 
>> testn wrote:
>>> 
>>> 1. You can close the searcher once you're done. If you want to reopen
>>> the index, you can close and reopen only the updated 3 readers and keep
>>> the 2 old indexreaders and reuse it. It should reduce the time to reopen
>>> it.
>>> 2. Make sure that you optimize it every once in a while
>>> 3. You might consider separating indices in separated storage and use
>>> ParallelReader
>>> 
>>> 
>>> 
>>> Sebastin wrote:
>>>> 
>>>> The problem in my pplication are as follows:
>>>>                  1.I am not able to see the updated records in my index
>>>> store because i instantiate 
>>>> IndexReader and IndexSearcher class once that is in the first
>>>> search.further searches use the same IndexReaders(5 Directories) and
>>>> IndexSearcher with different queries.
>>>> 
>>>>                 2.My search is very very slow First 2 Directories of
>>>> size 10 GB each which are having old index records and no update in
>>>> that remaining 3 Diretories are updated every second.
>>>> 
>>>>                 3.i am Indexing 20 million records per day so the Index
>>>> store gets growing and it makes search very very slower.
>>>>  
>>>>                4.I am using searcherOne class as the global application
>>>> helper class ,with the scope as APPLICATION it consists of one
>>>> IndexReader and IndexSearcher get set method which will hold the
>>>> IndexReader and IndexSearcher object after the First Search.it is used
>>>> for all other searches.
>>>> 
>>>>               5.I am using Lucene 2.2.0 version, in a WEB Application
>>>> which index 15 fields per document and Index 5 Fieds,store 10 Fields.i
>>>> am not using any sort in my query.for a single query upto the maximum
>>>> it fetches 600 records from the index store(5 direcories)    
>>>>                 
>>>> 
>>>> hossman wrote:
>>>>> 
>>>>> 
>>>>> : I set IndexSearcher as the application Object after the first
>>>>> search.
>>>>> 	...
>>>>> : how can i reconstruct the new IndexSearcher for every hour to see
>>>>> the
>>>>> : updated records .
>>>>> 
>>>>> i'm confused ... my understanding based on the comments you made below 
>>>>> (in an earlier message) was that you already *were* constructing a new  
>>>>> IndexSearcher once an hour -- but every time you do that, your memory 
>>>>> usage grows, and and that sometimes you got OOM Errors.
>>>>> 
>>>>> if that's not what you said, then i think you need to explain, in
>>>>> detail, 
>>>>> in one message, exactly what your problem is.  And don't assume we 
>>>>> understand anything -- tell us *EVERYTHING* (like, for example, what
>>>>> the 
>>>>> word "crore" means, how "searcherOne" is implemented, and the answer
>>>>> to 
>>>>> the specfic question i asked in my last message: does your
>>>>> application, 
>>>>> contain anywhere in it, any code that will close anything
>>>>> (IndexSearchers 
>>>>> or IndexReaders) ?
>>>>> 
>>>>> 
>>>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to
>>>>> 6 crore.
>>>>> : > for
>>>>> : > : every second i am updating my Index. i instantiate IndexSearcher
>>>>> object
>>>>> : > one
>>>>> : > : time for all the searches. for an hour can i see the updated
>>>>> records in
>>>>> : > the
>>>>> : > : indexstore by reinstantiating IndexSearcher object.but the
>>>>> problem when
>>>>> : > i
>>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is
>>>>> there any
>>>>> 
>>>>> 
>>>>> : > IndexSearcher are you explicitly closing both the old
>>>>> IndexSearcher as 
>>>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> -Hoss
>>>>> 
>>>>> 
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12655013
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
What is the error message? Probably Mike, Erick or Yonik can help you better
on this, since I'm not an expert in the index area.

Sebastin wrote:
> 
> HI testn,
>              1.I optimize the Large Indexes of size 10 GB using Luke.it
> optimize all the content into a single CFS file and it generates
> segments.gen and segments_8 file when i search the item it shows an error
> that segments file is not there.could you help me in this 
> 
> testn wrote:
>> 
>> 1. You can close the searcher once you're done. If you want to reopen the
>> index, you can close and reopen only the updated 3 readers and keep the 2
>> old indexreaders and reuse it. It should reduce the time to reopen it.
>> 2. Make sure that you optimize it every once in a while
>> 3. You might consider separating indices in separated storage and use
>> ParallelReader
>> 
>> 
>> 
>> Sebastin wrote:
>>> 
>>> The problem in my pplication are as follows:
>>>                  1.I am not able to see the updated records in my index
>>> store because i instantiate 
>>> IndexReader and IndexSearcher class once that is in the first
>>> search.further searches use the same IndexReaders(5 Directories) and
>>> IndexSearcher with different queries.
>>> 
>>>                 2.My search is very very slow First 2 Directories of
>>> size 10 GB each which are having old index records and no update in that
>>> remaining 3 Diretories are updated every second.
>>> 
>>>                 3.i am Indexing 20 million records per day so the Index
>>> store gets growing and it makes search very very slower.
>>>  
>>>                4.I am using searcherOne class as the global application
>>> helper class ,with the scope as APPLICATION it consists of one
>>> IndexReader and IndexSearcher get set method which will hold the
>>> IndexReader and IndexSearcher object after the First Search.it is used
>>> for all other searches.
>>> 
>>>               5.I am using Lucene 2.2.0 version, in a WEB Application
>>> which index 15 fields per document and Index 5 Fieds,store 10 Fields.i
>>> am not using any sort in my query.for a single query upto the maximum it
>>> fetches 600 records from the index store(5 direcories)    
>>>                 
>>> 
>>> hossman wrote:
>>>> 
>>>> 
>>>> : I set IndexSearcher as the application Object after the first search.
>>>> 	...
>>>> : how can i reconstruct the new IndexSearcher for every hour to see the
>>>> : updated records .
>>>> 
>>>> i'm confused ... my understanding based on the comments you made below 
>>>> (in an earlier message) was that you already *were* constructing a new  
>>>> IndexSearcher once an hour -- but every time you do that, your memory 
>>>> usage grows, and and that sometimes you got OOM Errors.
>>>> 
>>>> if that's not what you said, then i think you need to explain, in
>>>> detail, 
>>>> in one message, exactly what your problem is.  And don't assume we 
>>>> understand anything -- tell us *EVERYTHING* (like, for example, what
>>>> the 
>>>> word "crore" means, how "searcherOne" is implemented, and the answer to 
>>>> the specfic question i asked in my last message: does your application, 
>>>> contain anywhere in it, any code that will close anything
>>>> (IndexSearchers 
>>>> or IndexReaders) ?
>>>> 
>>>> 
>>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to 6
>>>> crore.
>>>> : > for
>>>> : > : every second i am updating my Index. i instantiate IndexSearcher
>>>> object
>>>> : > one
>>>> : > : time for all the searches. for an hour can i see the updated
>>>> records in
>>>> : > the
>>>> : > : indexstore by reinstantiating IndexSearcher object.but the
>>>> problem when
>>>> : > i
>>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is there
>>>> any
>>>> 
>>>> 
>>>> : > IndexSearcher are you explicitly closing both the old IndexSearcher
>>>> as 
>>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -Hoss
>>>> 
>>>> 
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12652816
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
             1. I optimized the large 10 GB indexes using Luke. It merged all
the content into a single CFS file and generated the segments.gen and
segments_8 files, but when I search, it shows an error that the segments file
is not there. Could you help me with this?

testn wrote:
> 
> 1. You can close the searcher once you're done. If you want to reopen the
> index, you can close and reopen only the updated 3 readers and keep the 2
> old indexreaders and reuse it. It should reduce the time to reopen it.
> 2. Make sure that you optimize it every once in a while
> 3. You might consider separating indices in separated storage and use
> ParallelReader
> 
> 
> 
> Sebastin wrote:
>> 
>> The problem in my pplication are as follows:
>>                  1.I am not able to see the updated records in my index
>> store because i instantiate 
>> IndexReader and IndexSearcher class once that is in the first
>> search.further searches use the same IndexReaders(5 Directories) and
>> IndexSearcher with different queries.
>> 
>>                 2.My search is very very slow First 2 Directories of size
>> 10 GB each which are having old index records and no update in that
>> remaining 3 Diretories are updated every second.
>> 
>>                 3.i am Indexing 20 million records per day so the Index
>> store gets growing and it makes search very very slower.
>>  
>>                4.I am using searcherOne class as the global application
>> helper class ,with the scope as APPLICATION it consists of one
>> IndexReader and IndexSearcher get set method which will hold the
>> IndexReader and IndexSearcher object after the First Search.it is used
>> for all other searches.
>> 
>>               5.I am using Lucene 2.2.0 version, in a WEB Application
>> which index 15 fields per document and Index 5 Fieds,store 10 Fields.i am
>> not using any sort in my query.for a single query upto the maximum it
>> fetches 600 records from the index store(5 direcories)    
>>                 
>> 
>> hossman wrote:
>>> 
>>> 
>>> : I set IndexSearcher as the application Object after the first search.
>>> 	...
>>> : how can i reconstruct the new IndexSearcher for every hour to see the
>>> : updated records .
>>> 
>>> i'm confused ... my understanding based on the comments you made below 
>>> (in an earlier message) was that you already *were* constructing a new  
>>> IndexSearcher once an hour -- but every time you do that, your memory 
>>> usage grows, and and that sometimes you got OOM Errors.
>>> 
>>> if that's not what you said, then i think you need to explain, in
>>> detail, 
>>> in one message, exactly what your problem is.  And don't assume we 
>>> understand anything -- tell us *EVERYTHING* (like, for example, what the 
>>> word "crore" means, how "searcherOne" is implemented, and the answer to 
>>> the specfic question i asked in my last message: does your application, 
>>> contain anywhere in it, any code that will close anything
>>> (IndexSearchers 
>>> or IndexReaders) ?
>>> 
>>> 
>>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to 6
>>> crore.
>>> : > for
>>> : > : every second i am updating my Index. i instantiate IndexSearcher
>>> object
>>> : > one
>>> : > : time for all the searches. for an hour can i see the updated
>>> records in
>>> : > the
>>> : > : indexstore by reinstantiating IndexSearcher object.but the problem
>>> when
>>> : > i
>>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is there
>>> any
>>> 
>>> 
>>> : > IndexSearcher are you explicitly closing both the old IndexSearcher
>>> as 
>>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>>> 
>>> 
>>> 
>>> 
>>> -Hoss
>>> 
>>> 
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>>> For additional commands, e-mail: java-user-help@lucene.apache.org
>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12650012
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
1. You can close the searcher once you're done. If you want to reopen the
index, close and reopen only the 3 updated readers, and keep and reuse the 2
old IndexReaders. That should reduce the reopen time.
2. Make sure that you optimize the index every once in a while.
3. You might consider keeping the indices on separate storage and using
ParallelReader.
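
Point 1 above boils down to reader lifecycle bookkeeping. Below is a minimal,
self-contained sketch of that pattern; `Reader` is a hypothetical stand-in for
Lucene's IndexReader (opened via IndexReader.open(directory) in the real
application), used here so the resource handling is visible without the index
machinery:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class ReopenSketch {
    // Hypothetical stand-in for org.apache.lucene.index.IndexReader.
    static class Reader implements Closeable {
        final String dir;
        boolean closed = false;
        Reader(String dir) { this.dir = dir; }      // stands in for IndexReader.open(dir)
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) throws IOException {
        // Two old 10 GB indices that never change: open once, cache forever.
        Reader old1 = new Reader("index01");
        Reader old2 = new Reader("index02");
        // Three frequently updated indices: only these are ever cycled.
        Reader upd3 = new Reader("index03");
        Reader upd4 = new Reader("index04");
        Reader upd5 = new Reader("index05");

        // Periodic refresh: close and reopen ONLY the updated readers.
        upd3.close(); upd3 = new Reader("index03");
        upd4.close(); upd4 = new Reader("index04");
        upd5.close(); upd5 = new Reader("index05");

        // Rebuild the merged view (stands in for new MultiReader(readArray))
        // from the two cached readers plus the three fresh ones.
        List<Reader> merged = Arrays.asList(old1, old2, upd3, upd4, upd5);
        System.out.println(merged.size() + " readers, old1 closed=" + old1.closed);
    }
}
```

The point is that the cold readers are opened once and reused, only the hot
readers are closed and reopened, and the merged view is rebuilt from the mix.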



Sebastin wrote:
> 
> The problem in my pplication are as follows:
>                  1.I am not able to see the updated records in my index
> store because i instantiate 
> IndexReader and IndexSearcher class once that is in the first
> search.further searches use the same IndexReaders(5 Directories) and
> IndexSearcher with different queries.
> 
>                 2.My search is very very slow First 2 Directories of size
> 10 GB each which are having old index records and no update in that
> remaining 3 Diretories are updated every second.
> 
>                 3.i am Indexing 20 million records per day so the Index
> store gets growing and it makes search very very slower.
>  
>                4.I am using searcherOne class as the global application
> helper class ,with the scope as APPLICATION it consists of one IndexReader
> and IndexSearcher get set method which will hold the IndexReader and
> IndexSearcher object after the First Search.it is used for all other
> searches.
> 
>               5.I am using Lucene 2.2.0 version, in a WEB Application
> which index 15 fields per document and Index 5 Fieds,store 10 Fields.i am
> not using any sort in my query.for a single query upto the maximum it
> fetches 600 records from the index store(5 direcories)    
>                 
> 
> hossman wrote:
>> 
>> 
>> : I set IndexSearcher as the application Object after the first search.
>> 	...
>> : how can i reconstruct the new IndexSearcher for every hour to see the
>> : updated records .
>> 
>> i'm confused ... my understanding based on the comments you made below 
>> (in an earlier message) was that you already *were* constructing a new  
>> IndexSearcher once an hour -- but every time you do that, your memory 
>> usage grows, and and that sometimes you got OOM Errors.
>> 
>> if that's not what you said, then i think you need to explain, in detail, 
>> in one message, exactly what your problem is.  And don't assume we 
>> understand anything -- tell us *EVERYTHING* (like, for example, what the 
>> word "crore" means, how "searcherOne" is implemented, and the answer to 
>> the specfic question i asked in my last message: does your application, 
>> contain anywhere in it, any code that will close anything (IndexSearchers 
>> or IndexReaders) ?
>> 
>> 
>> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to 6
>> crore.
>> : > for
>> : > : every second i am updating my Index. i instantiate IndexSearcher
>> object
>> : > one
>> : > : time for all the searches. for an hour can i see the updated
>> records in
>> : > the
>> : > : indexstore by reinstantiating IndexSearcher object.but the problem
>> when
>> : > i
>> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is there
>> any
>> 
>> 
>> : > IndexSearcher are you explicitly closing both the old IndexSearcher
>> as 
>> : > well as all of 4 of those old IndexReaders and the MultiReader?
>> 
>> 
>> 
>> 
>> -Hoss
>> 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: java-user-help@lucene.apache.org
>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12595489
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
The problems in my application are as follows:
                 1. I am not able to see the updated records in my index
store, because I instantiate the IndexReader and IndexSearcher classes only
once, on the first search. Further searches reuse the same IndexReaders (5
directories) and IndexSearcher with different queries.

                2. My search is very slow. The first 2 directories, of size 10
GB each, hold old index records and receive no updates; the remaining 3
directories are updated every second.

                3. I am indexing 20 million records per day, so the index
store keeps growing, and that makes search slower and slower.

               4. I am using a searcherOne class as a global application
helper class with APPLICATION scope. It has get/set methods that hold the
IndexReader and IndexSearcher objects after the first search, and they are
reused for all other searches.

              5. I am using Lucene 2.2.0 in a web application, with 15 fields
per document: 5 fields are indexed and 10 are stored. I am not using any sort
in my query. A single query fetches at most 600 records from the index store
(5 directories).
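
The searcherOne helper described in point 4 can be made refresh-friendly by
holding the searcher behind an atomic reference and periodically swapping in a
fresh one, closing the old one so its heap can be reclaimed. A minimal sketch,
where `Searcher` is a hypothetical stand-in for Lucene's IndexSearcher plus
the readers it owns:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicReference;

public class SearcherHolder {
    // Hypothetical stand-in for an IndexSearcher plus the readers it owns.
    static class Searcher implements Closeable {
        final int version;
        volatile boolean closed = false;
        Searcher(int version) { this.version = version; }
        @Override public void close() { closed = true; }
    }

    private final AtomicReference<Searcher> current =
        new AtomicReference<Searcher>(new Searcher(1));

    Searcher get() { return current.get(); }

    // Called periodically (e.g. hourly): publish the new searcher, then close
    // the old one so its memory can actually be reclaimed.
    void refresh(Searcher fresh) {
        Searcher old = current.getAndSet(fresh);
        old.close();
    }

    public static void main(String[] args) {
        SearcherHolder holder = new SearcherHolder();
        Searcher first = holder.get();
        holder.refresh(new Searcher(2));
        System.out.println("current=" + holder.get().version
                + " oldClosed=" + first.closed);
    }
}
```

In production, in-flight searches may still be using the old searcher at the
moment of the swap, so a real implementation would typically reference-count
searchers and defer the close until the last in-flight search finishes.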
                

hossman wrote:
> 
> 
> : I set IndexSearcher as the application Object after the first search.
> 	...
> : how can i reconstruct the new IndexSearcher for every hour to see the
> : updated records .
> 
> i'm confused ... my understanding based on the comments you made below 
> (in an earlier message) was that you already *were* constructing a new  
> IndexSearcher once an hour -- but every time you do that, your memory 
> usage grows, and and that sometimes you got OOM Errors.
> 
> if that's not what you said, then i think you need to explain, in detail, 
> in one message, exactly what your problem is.  And don't assume we 
> understand anything -- tell us *EVERYTHING* (like, for example, what the 
> word "crore" means, how "searcherOne" is implemented, and the answer to 
> the specfic question i asked in my last message: does your application, 
> contain anywhere in it, any code that will close anything (IndexSearchers 
> or IndexReaders) ?
> 
> 
> : > : I use StandardAnalyzer.the records daily ranges from 5 crore to 6
> crore.
> : > for
> : > : every second i am updating my Index. i instantiate IndexSearcher
> object
> : > one
> : > : time for all the searches. for an hour can i see the updated records
> in
> : > the
> : > : indexstore by reinstantiating IndexSearcher object.but the problem
> when
> : > i
> : > : reinstantiate IndexSearcher ,my RAM memory gets appended.is there
> any
> 
> 
> : > IndexSearcher are you explicitly closing both the old IndexSearcher as 
> : > well as all of 4 of those old IndexReaders and the MultiReader?
> 
> 
> 
> 
> -Hoss
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12528983
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Chris Hostetter <ho...@fucit.org>.
: I set IndexSearcher as the application Object after the first search.
	...
: how can i reconstruct the new IndexSearcher for every hour to see the
: updated records .

i'm confused ... my understanding based on the comments you made below 
(in an earlier message) was that you already *were* constructing a new  
IndexSearcher once an hour -- but every time you do that, your memory 
usage grows, and that sometimes you got OOM Errors.

if that's not what you said, then i think you need to explain, in detail, 
in one message, exactly what your problem is.  And don't assume we 
understand anything -- tell us *EVERYTHING* (like, for example, what the 
word "crore" means, how "searcherOne" is implemented, and the answer to 
the specific question i asked in my last message: does your application, 
contain anywhere in it, any code that will close anything (IndexSearchers 
or IndexReaders) ?
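
The reason this question matters: closing an outer wrapper does not
necessarily release the readers it was built from, so each layer has to be
closed explicitly. A self-contained sketch of that behavior, where `Reader`
and `Multi` are hypothetical stand-ins for IndexReader and a MultiReader that
does not own its sub-readers (the behavior described elsewhere in this
thread):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseChainSketch {
    // Hypothetical stand-in for IndexReader.
    static class Reader implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Stand-in for a MultiReader built from existing readers: per the thread,
    // closing it does not close the sub-readers it was merely handed.
    static class Multi implements Closeable {
        final Reader[] subs;
        boolean closed = false;
        Multi(Reader[] subs) { this.subs = subs; }
        @Override public void close() { closed = true; }  // subs left open on purpose
    }

    public static void main(String[] args) throws IOException {
        Reader r1 = new Reader();
        Reader r2 = new Reader();
        Multi multi = new Multi(new Reader[] { r1, r2 });

        multi.close();  // closing only the wrapper ...
        System.out.println("sub closed after multi.close(): " + r1.closed);

        r1.close();     // ... so every sub-reader must be closed explicitly
        r2.close();
        System.out.println("sub closed after explicit close: " + r1.closed);
    }
}
```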


: > : I use StandardAnalyzer.the records daily ranges from 5 crore to 6 crore.
: > for
: > : every second i am updating my Index. i instantiate IndexSearcher object
: > one
: > : time for all the searches. for an hour can i see the updated records in
: > the
: > : indexstore by reinstantiating IndexSearcher object.but the problem when
: > i
: > : reinstantiate IndexSearcher ,my RAM memory gets appended.is there any


: > IndexSearcher are you explicitly closing both the old IndexSearcher as 
: > well as all of 4 of those old IndexReaders and the MultiReader?




-Hoss


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
I don't close the IndexReader after the first search. If I instantiate a new
IndexSearcher object, will it show the updated records in those directories?

Sebastin wrote:
> 
> I set IndexSearcher as the application Object after the first search.
> 
> here is my code:
> 
>                              if(searcherOne.isOpen()==(true)){       
>      
>                                          Directory compressDir2 = 
>                      
> FSDirectory.getDirectory(compressionSourceDir02,false);
>                                          IndexReader compressedSource2 =
> IndexReader.open(compressDir2);
>                                        Directory compressDir3 = 
>                                             
> FSDirectory.getDirectory(compressionSourceDir03,false);
>                                          IndexReader compressedSource3 =
> IndexReader.open(compressDir3);
>                                          Directory compressDir4 = 
>                                             
> FSDirectory.getDirectory(compressionSourceDir04,false);
>                                          IndexReader compressedSource4 =
> IndexReader.open(compressDir4); 
>                                         
>           
>                                          
>         IndexReader[] readArray =
> {compressedSource2,compressedSource3,compressedSource4};
>         //merged reader
>         IndexReader mergedReader = new MultiReader(readArray);
>         IndexSearcher is = new IndexSearcher(mergedReader);
>                                 
> BooleanQuery.setMaxClauseCount(1000000000);
>         searcherOne.setIndexSearch(is);
>                                          searcherOne.setOpen(false);
>                                  BigInteger _l = new BigInteger(mobile1,
> 10);
>                                  _mobile = _l.toString(36);
>                                  QueryParser parser = 
>                                      new
> QueryParser(AppConstants.CONTENTS, new StandardAnalyzer());
>                                  
>                                  
>                                 searchQuery= 
>                                        new
> StringBuffer().append(_mobile).append(" AND dateSc:["
> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
> ").append("AND").append(" ").append(callTyp).toString();
>                                     
>                                  
>                                  
>                                  
>                                  Query callDetailquery =
> parser.parse(searchQuery);
>                                  
>                                  hits = is.search(callDetailquery);
>                                  System.out.println("FirstSearch");
>                                  
>                                  
>                              }
>       // System.out.println("No Of MAXIMUM dOCUMENTS : " +is.maxDoc());
>       else{
>           
>        is=searcherOne.getIndexSearch();
>       
>         BigInteger _l = new BigInteger(mobile1, 10);
>         _mobile = _l.toString(36);
>    
>         BooleanQuery.setMaxClauseCount(1000000000);
>         QueryParser parser = 
>             new QueryParser(AppConstants.CONTENTS, new
> StandardAnalyzer());
>         
>           searchQuery=new StringBuffer().append(_mobile).append("
> ").append(" AND dateSc:[" ).append(fromDate).append(" TO
> ").append(toDate).append("]").append(" ").append("AND").append("
> ").append(callTyp).toString(); 
>          
>               
>            
>        
>         
>         
>          callDetailquery = parser.parse(searchQuery);
>         
>       hits = is.search(callDetailquery);
>              
> 
> how can i reconstruct the new IndexSearcher for every hour to see the
> updated records .
>                                      
> 
> 
> hossman wrote:
>> 
>> 
>> : I use StandardAnalyzer.the records daily ranges from 5 crore to 6
>> crore. for
>> : every second i am updating my Index. i instantiate IndexSearcher object
>> one
>> : time for all the searches. for an hour can i see the updated records in
>> the
>> : indexstore by reinstantiating IndexSearcher object.but the problem when
>> i
>> : reinstantiate IndexSearcher ,my RAM memory gets appended.is there any
>> 
>> skimming hte code below, you are opening an IndexSearcher over a 
>> MultiReader over 4 seperate IndexReaders ... when you instantiate a new 
>> IndexSearcher are you explicitly closing both the old IndexSearcher as 
>> well as all of 4 of those old IndexReaders and the MultiReader?
>> 
>> closing an IndexSearcher will only close the underlying Reader if it 
>> opened it .. and a MultiReader constructed from other IndexReaders will 
>> never close them.
>> 
>> -Hoss
>> 
>> 
>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12516112
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
I set the IndexSearcher as an application-scoped object after the first search.

here is my code:

                             if (searcherOne.isOpen()) {

                                 Directory compressDir2 =
                                     FSDirectory.getDirectory(compressionSourceDir02, false);
                                 IndexReader compressedSource2 = IndexReader.open(compressDir2);
                                 Directory compressDir3 =
                                     FSDirectory.getDirectory(compressionSourceDir03, false);
                                 IndexReader compressedSource3 = IndexReader.open(compressDir3);
                                 Directory compressDir4 =
                                     FSDirectory.getDirectory(compressionSourceDir04, false);
                                 IndexReader compressedSource4 = IndexReader.open(compressDir4);

                                 IndexReader[] readArray =
                                     { compressedSource2, compressedSource3, compressedSource4 };
                                 // merged reader
                                 IndexReader mergedReader = new MultiReader(readArray);
                                 IndexSearcher is = new IndexSearcher(mergedReader);
                                 BooleanQuery.setMaxClauseCount(1000000000);
                                 searcherOne.setIndexSearch(is);
                                 searcherOne.setOpen(false);

                                 BigInteger _l = new BigInteger(mobile1, 10);
                                 _mobile = _l.toString(36);
                                 QueryParser parser =
                                     new QueryParser(AppConstants.CONTENTS, new StandardAnalyzer());

                                 searchQuery = new StringBuffer().append(_mobile)
                                     .append(" AND dateSc:[").append(fromDate).append(" TO ")
                                     .append(toDate).append("]").append(" AND ")
                                     .append(callTyp).toString();

                                 Query callDetailquery = parser.parse(searchQuery);

                                 hits = is.search(callDetailquery);
                                 System.out.println("FirstSearch");

                             } else {
                                 // System.out.println("No Of MAXIMUM dOCUMENTS : " + is.maxDoc());

                                 is = searcherOne.getIndexSearch();

                                 BigInteger _l = new BigInteger(mobile1, 10);
                                 _mobile = _l.toString(36);

                                 BooleanQuery.setMaxClauseCount(1000000000);
                                 QueryParser parser =
                                     new QueryParser(AppConstants.CONTENTS, new StandardAnalyzer());

                                 searchQuery = new StringBuffer().append(_mobile)
                                     .append(" AND dateSc:[").append(fromDate).append(" TO ")
                                     .append(toDate).append("]").append(" AND ")
                                     .append(callTyp).toString();

                                 callDetailquery = parser.parse(searchQuery);

                                 hits = is.search(callDetailquery);
                             }

How can I reconstruct a new IndexSearcher every hour so that searches see the
updated records?
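
One way to do this, sketched against the Lucene 2.x API used in this thread
(the getReaders/setReaders accessors on searcherOne are hypothetical, added
here only so the old readers can be found again at refresh time):

    // Build the new searcher first, publish it, then fully close the old
    // set. Closing the IndexSearcher alone is not enough: it did not open
    // the readers, and a MultiReader never closes readers passed to it.
    IndexSearcher oldSearcher = searcherOne.getIndexSearch();
    IndexReader[] oldReaders = searcherOne.getReaders();      // hypothetical accessor

    IndexReader r2 = IndexReader.open(FSDirectory.getDirectory(compressionSourceDir02, false));
    IndexReader r3 = IndexReader.open(FSDirectory.getDirectory(compressionSourceDir03, false));
    IndexReader r4 = IndexReader.open(FSDirectory.getDirectory(compressionSourceDir04, false));
    IndexReader merged = new MultiReader(new IndexReader[] { r2, r3, r4 });

    searcherOne.setIndexSearch(new IndexSearcher(merged));
    searcherOne.setReaders(new IndexReader[] { merged, r2, r3, r4 }); // hypothetical accessor

    if (oldSearcher != null) {
        oldSearcher.close();
    }
    if (oldReaders != null) {
        for (int i = 0; i < oldReaders.length; i++) {
            oldReaders[i].close();   // the old MultiReader and each sub-reader
        }
    }

Running this from a timer (e.g. java.util.Timer) once an hour, instead of
reopening per request, bounds the number of live reader sets to two.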
                                     


hossman wrote:
> 
> 
> : I use StandardAnalyzer.the records daily ranges from 5 crore to 6 crore.
> for
> : every second i am updating my Index. i instantiate IndexSearcher object
> one
> : time for all the searches. for an hour can i see the updated records in
> the
> : indexstore by reinstantiating IndexSearcher object.but the problem when
> i
> : reinstantiate IndexSearcher ,my RAM memory gets appended.is there any
> 
> skimming hte code below, you are opening an IndexSearcher over a 
> MultiReader over 4 seperate IndexReaders ... when you instantiate a new 
> IndexSearcher are you explicitly closing both the old IndexSearcher as 
> well as all of 4 of those old IndexReaders and the MultiReader?
> 
> closing an IndexSearcher will only close the underlying Reader if it 
> opened it .. and a MultiReader constructed from other IndexReaders will 
> never close them.
> 
> -Hoss
> 
> 
> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12515624
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Chris Hostetter <ho...@fucit.org>.
: I use StandardAnalyzer.the records daily ranges from 5 crore to 6 crore. for
: every second i am updating my Index. i instantiate IndexSearcher object one
: time for all the searches. for an hour can i see the updated records in the
: indexstore by reinstantiating IndexSearcher object.but the problem when i
: reinstantiate IndexSearcher ,my RAM memory gets appended.is there any

skimming the code below, you are opening an IndexSearcher over a 
MultiReader over 4 separate IndexReaders ... when you instantiate a new 
IndexSearcher are you explicitly closing both the old IndexSearcher as 
well as all 4 of those old IndexReaders and the MultiReader?

closing an IndexSearcher will only close the underlying Reader if it 
opened it .. and a MultiReader constructed from other IndexReaders will 
never close them.
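
In code, with the variable names from the snippets earlier in the thread
(a sketch against the Lucene 2.x API):

    // Close every layer that was opened explicitly; the searcher will not
    // do it for you, and neither will the MultiReader.
    is.close();               // the old IndexSearcher
    mergedReader.close();     // the MultiReader wrapper
    indexSource2.close();     // ...and each underlying IndexReader
    indexSource3.close();
    indexSource4.close();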

-Hoss


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
I use StandardAnalyzer. The records range from 5 crore to 6 crore (50-60
million) daily, and I update my index every second. I instantiate the
IndexSearcher object once for all searches. Can I see the updated records in
the index store by re-instantiating the IndexSearcher every hour? The problem
is that when I re-instantiate the IndexSearcher, my RAM usage keeps growing.
Is there any way to control the memory at that point without affecting
performance?

testn wrote:
> 
> A couple things to make sure:
> 1. When you open IndexWriter, what is the analyzer you use?
> StandardAnalyzer?
> 2. How many records are there?
> 3. Could you also check number of terms in your indices? If there are too
> many terms, you could consider chop something in smaller piece for
> example... store area code and phone number separately if the numbers are
> pretty distributed.
> 
> 
> Sebastin wrote:
>> 
>> Hi testn,
>>            here is my index details:
>>                             Index fields :5 fields
>>                             Store Fileds:10 fields
>>            
>> 
>> Index code:              
>> 
>>   contents=new StringBuilder().append(compCallingPartyNumber).append("
>> ").append(compCalledPartyNumber).append("
>> ").append(compImsiNumber).append(" ").append(callType).toString();
>> 
>> 
>> records=new StringBuilder().append(compCallingPartyNumber).append("
>> ").append(compCalledPartyNumber).append("
>> ").append(compchargDur).append(" ").append(compTimeSc).append("
>> ").append(compImsiNumber).append(" ").append(outgoingRoute).append("
>> ").append(incomingRoute).append(" ").append(cgiLocation).toString();
>> 
>> 
>>                                Document document = new Document();
>>                                     document.add(new Field("contents",
>>                                                            contents,
>>                                                           
>> Field.Store.NO,
>>                                                           
>> Field.Index.TOKENIZED));
>>                                     document.add(new Field("fil", filen,
>>                                                           
>> Field.Store.NO,
>>                                                           
>> Field.Index.TOKENIZED));
>>                                     document.add(new Field("records",
>> records,
>>                                                           
>> Field.Store.YES,
>>                                                           
>> Field.Index.NO));
>>                                     document.add(new Field("dateSc",
>> dateSc,
>>                                                           
>> Field.Store.YES,
>>                                                           
>> Field.Index.TOKENIZED));
>>                                     
>>                                     indexWriter.addDocument(document);
>> 
>> inputs for the document:
>> 
>> compCallingPartyNumber="9840836588";
>> compCalledPartyNumber="9840861114";
>> compImsiNumber="984510005469874";
>> callType="1";
>> compChargDur="98456";
>> compTimeSc="984";
>> outgoingRoute="i987j";
>> incomingRoute="poi09";
>> cgiLocation="dft1234567";
>> 
>> here is my search code:
>> 
>>                                          Directory indexDir2 = 
>>                       FSDirectory.getDirectory(indexSourceDir02,false);
>>                                          IndexReader indexSource2 =
>> IndexReader.open(indexDir2);
>>                                        Directory indexDir3 = 
>>                                             
>> FSDirectory.getDirectory(indexSourceDir03,false);
>>                                          IndexReader indexSource3 =
>> IndexReader.open(indexDir3);
>>                                          Directory indexDir4 = 
>>                                             
>> FSDirectory.getDirectory(indexSourceDir04,false);
>>                                          IndexReader indexSource4 =
>> IndexReader.open(indexDir4); 
>>                                         
>>            
>>                                          
>>         IndexReader[] readArray =
>> {indexSource2,indexSource3,indexSource4};
>>         //merged reader
>>         IndexReader mergedReader = new MultiReader(readArray);
>>         IndexSearcher is = new IndexSearcher(mergedReader);
>>         
>>                                 
>>                                  QueryParser parser = 
>>                                      new QueryParser("contents" ,new
>> StandardAnalyzer());
>>                                  
>>                                  
>>                                 String searchQuery= 
>>                                        new
>> StringBuffer().append(inputNo).append(" AND dateSc:["
>> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
>> ").append("AND").append(" ").append(callTyp).toString();
>>                                    
>>                                  
>>                                  
>>                                  Query callDetailquery =
>> parser.parse(searchQuery);
>>                                  
>>                                  hits = is.search(callDetailquery); 
>> 
>> 
>> 
>> 
>>                                              
>>                                        
>>                                           
>>          
>> 
>> testn wrote:
>>> 
>>> Can you provide more info about your index? How many documents, fields
>>> and what is the average document length?
>>> 
>>> 
>>> Sebastin wrote:
>>>> 
>>>> Hi testn,
>>>>            i index the dateSc as 070904(2007/09/04) format.i am not
>>>> using any timestamp here.how can we effectively reopen the
>>>> IndexSearcher  for an hour and save the memory because my index gets
>>>> updated every minute.
>>>> 
>>>> testn wrote:
>>>>> 
>>>>> Check out Wiki for more information at
>>>>> http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
>>>>> 
>>>>> 
>>>>> 
>>>>> Sebastin wrote:
>>>>>> 
>>>>>> Hi All,
>>>>>>        i used to search 3 Lucene Index store of size 6 GB,10 GB,10 GB
>>>>>> of records using MultiReader class.
>>>>>> 
>>>>>> here is the following code snippet:
>>>>>> 
>>>>>> 
>>>>>>      
>>>>>>                                          Directory indexDir2 = 
>>>>>>                      
>>>>>> FSDirectory.getDirectory(indexSourceDir02,false);
>>>>>>                                          IndexReader indexSource2 =
>>>>>> IndexReader.open(indexDir2);
>>>>>>                                        Directory indexDir3 = 
>>>>>>                                             
>>>>>> FSDirectory.getDirectory(indexSourceDir03,false);
>>>>>>                                          IndexReader indexSource3 =
>>>>>> IndexReader.open(indexDir3);
>>>>>>                                          Directory indexDir4 = 
>>>>>>                                             
>>>>>> FSDirectory.getDirectory(indexSourceDir04,false);
>>>>>>                                          IndexReader indexSource4 =
>>>>>> IndexReader.open(indexDir4); 
>>>>>>                                         
>>>>>>            
>>>>>>                                          
>>>>>>         IndexReader[] readArray =
>>>>>> {indexSource2,indexSource3,indexSource4};
>>>>>>         //merged reader
>>>>>>         IndexReader mergedReader = new MultiReader(readArray);
>>>>>>         IndexSearcher is = new IndexSearcher(mergedReader);
>>>>>>         
>>>>>>                                 
>>>>>>                                  QueryParser parser = 
>>>>>>                                      new QueryParser("contents" ,new
>>>>>> StandardAnalyzer());
>>>>>>                                  
>>>>>>                                  
>>>>>>                                 String searchQuery= 
>>>>>>                                        new
>>>>>> StringBuffer().append(inputNo).append(" AND dateSc:["
>>>>>> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
>>>>>> ").append("AND").append(" ").append(callTyp).toString();
>>>>>>                                    
>>>>>>                                  
>>>>>>                                  
>>>>>>                                  Query callDetailquery =
>>>>>> parser.parse(searchQuery);
>>>>>>                                  
>>>>>>                                  hits = is.search(callDetailquery); 
>>>>>> 
>>>>>> 
>>>>>> it takes 300 MB of RAM for every search and it is very very slow is
>>>>>> there any other way to control the Memory and to make search faster.i
>>>>>> use SINGLETON  to use the IndexSearcher as a one time used object for
>>>>>> all the instances.
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12500824
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
A couple of things to check:
1. When you open the IndexWriter, which analyzer do you use?
StandardAnalyzer?
2. How many records are there?
3. Could you also check the number of terms in your indices? If there are
too many terms, you could consider chopping values into smaller pieces, for
example storing the area code and phone number separately if the numbers
are fairly well distributed.
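
For point 3, the distinct-term count can be checked with a short loop (a
sketch, assuming the Lucene 2.x TermEnum API; indexDir is a placeholder
path):

    // Walk the term dictionary and count distinct terms; a very large
    // count suggests field values should be split into smaller pieces.
    IndexReader reader = IndexReader.open(indexDir);
    TermEnum termEnum = reader.terms();
    int termCount = 0;
    while (termEnum.next()) {
        termCount++;
    }
    termEnum.close();
    reader.close();
    System.out.println("distinct terms: " + termCount);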


Sebastin wrote:
> 
> Hi testn,
>            here is my index details:
>                             Index fields :5 fields
>                             Store Fileds:10 fields
>            
> 
> Index code:              
> 
>   contents=new StringBuilder().append(compCallingPartyNumber).append("
> ").append(compCalledPartyNumber).append("
> ").append(compImsiNumber).append(" ").append(callType).toString();
> 
> 
> records=new StringBuilder().append(compCallingPartyNumber).append("
> ").append(compCalledPartyNumber).append(" ").append(compchargDur).append("
> ").append(compTimeSc).append(" ").append(compImsiNumber).append("
> ").append(outgoingRoute).append(" ").append(incomingRoute).append("
> ").append(cgiLocation).toString();
> 
> 
>                                Document document = new Document();
>                                     document.add(new Field("contents",
>                                                            contents,
>                                                            Field.Store.NO,
>                                                           
> Field.Index.TOKENIZED));
>                                     document.add(new Field("fil", filen,
>                                                            Field.Store.NO,
>                                                           
> Field.Index.TOKENIZED));
>                                     document.add(new Field("records",
> records,
>                                                           
> Field.Store.YES,
>                                                           
> Field.Index.NO));
>                                     document.add(new Field("dateSc",
> dateSc,
>                                                           
> Field.Store.YES,
>                                                           
> Field.Index.TOKENIZED));
>                                     
>                                     indexWriter.addDocument(document);
> 
> inputs for the document:
> 
> compCallingPartyNumber="9840836588";
> compCalledPartyNumber="9840861114";
> compImsiNumber="984510005469874";
> callType="1";
> compChargDur="98456";
> compTimeSc="984";
> outgoingRoute="i987j";
> incomingRoute="poi09";
> cgiLocation="dft1234567";
> 
> here is my search code:
> 
>                                          Directory indexDir2 = 
>                       FSDirectory.getDirectory(indexSourceDir02,false);
>                                          IndexReader indexSource2 =
> IndexReader.open(indexDir2);
>                                        Directory indexDir3 = 
>                                             
> FSDirectory.getDirectory(indexSourceDir03,false);
>                                          IndexReader indexSource3 =
> IndexReader.open(indexDir3);
>                                          Directory indexDir4 = 
>                                             
> FSDirectory.getDirectory(indexSourceDir04,false);
>                                          IndexReader indexSource4 =
> IndexReader.open(indexDir4); 
>                                         
>            
>                                          
>         IndexReader[] readArray =
> {indexSource2,indexSource3,indexSource4};
>         //merged reader
>         IndexReader mergedReader = new MultiReader(readArray);
>         IndexSearcher is = new IndexSearcher(mergedReader);
>         
>                                 
>                                  QueryParser parser = 
>                                      new QueryParser("contents" ,new
> StandardAnalyzer());
>                                  
>                                  
>                                 String searchQuery= 
>                                        new
> StringBuffer().append(inputNo).append(" AND dateSc:["
> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
> ").append("AND").append(" ").append(callTyp).toString();
>                                    
>                                  
>                                  
>                                  Query callDetailquery =
> parser.parse(searchQuery);
>                                  
>                                  hits = is.search(callDetailquery); 
> 
> 
> 
> 
>                                              
>                                        
>                                           
>          
> 
> testn wrote:
>> 
>> Can you provide more info about your index? How many documents, fields
>> and what is the average document length?
>> 
>> 
>> Sebastin wrote:
>>> 
>>> Hi testn,
>>>            i index the dateSc as 070904(2007/09/04) format.i am not
>>> using any timestamp here.how can we effectively reopen the IndexSearcher 
>>> for an hour and save the memory because my index gets updated every
>>> minute.
>>> 
>>> testn wrote:
>>>> 
>>>> Check out Wiki for more information at
>>>> http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
>>>> 
>>>> 
>>>> 
>>>> Sebastin wrote:
>>>>> 
>>>>> Hi All,
>>>>>        i used to search 3 Lucene Index store of size 6 GB,10 GB,10 GB
>>>>> of records using MultiReader class.
>>>>> 
>>>>> here is the following code snippet:
>>>>> 
>>>>> 
>>>>>      
>>>>>                                          Directory indexDir2 = 
>>>>>                      
>>>>> FSDirectory.getDirectory(indexSourceDir02,false);
>>>>>                                          IndexReader indexSource2 =
>>>>> IndexReader.open(indexDir2);
>>>>>                                        Directory indexDir3 = 
>>>>>                                             
>>>>> FSDirectory.getDirectory(indexSourceDir03,false);
>>>>>                                          IndexReader indexSource3 =
>>>>> IndexReader.open(indexDir3);
>>>>>                                          Directory indexDir4 = 
>>>>>                                             
>>>>> FSDirectory.getDirectory(indexSourceDir04,false);
>>>>>                                          IndexReader indexSource4 =
>>>>> IndexReader.open(indexDir4); 
>>>>>                                         
>>>>>            
>>>>>                                          
>>>>>         IndexReader[] readArray =
>>>>> {indexSource2,indexSource3,indexSource4};
>>>>>         //merged reader
>>>>>         IndexReader mergedReader = new MultiReader(readArray);
>>>>>         IndexSearcher is = new IndexSearcher(mergedReader);
>>>>>         
>>>>>                                 
>>>>>                                  QueryParser parser = 
>>>>>                                      new QueryParser("contents" ,new
>>>>> StandardAnalyzer());
>>>>>                                  
>>>>>                                  
>>>>>                                 String searchQuery= 
>>>>>                                        new
>>>>> StringBuffer().append(inputNo).append(" AND dateSc:["
>>>>> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
>>>>> ").append("AND").append(" ").append(callTyp).toString();
>>>>>                                    
>>>>>                                  
>>>>>                                  
>>>>>                                  Query callDetailquery =
>>>>> parser.parse(searchQuery);
>>>>>                                  
>>>>>                                  hits = is.search(callDetailquery); 
>>>>> 
>>>>> 
>>>>> it takes 300 MB of RAM for every search and it is very very slow is
>>>>> there any other way to control the Memory and to make search faster.i
>>>>> use SINGLETON  to use the IndexSearcher as a one time used object for
>>>>> all the instances.
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12496515
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
           here are my index details:
                            Indexed fields: 5 fields
                            Stored fields: 10 fields
           

Index code:              

  contents = new StringBuilder().append(compCallingPartyNumber).append(" ")
      .append(compCalledPartyNumber).append(" ")
      .append(compImsiNumber).append(" ")
      .append(callType).toString();

  records = new StringBuilder().append(compCallingPartyNumber).append(" ")
      .append(compCalledPartyNumber).append(" ")
      .append(compchargDur).append(" ")
      .append(compTimeSc).append(" ")
      .append(compImsiNumber).append(" ")
      .append(outgoingRoute).append(" ")
      .append(incomingRoute).append(" ")
      .append(cgiLocation).toString();

  Document document = new Document();
  document.add(new Field("contents", contents, Field.Store.NO, Field.Index.TOKENIZED));
  document.add(new Field("fil", filen, Field.Store.NO, Field.Index.TOKENIZED));
  document.add(new Field("records", records, Field.Store.YES, Field.Index.NO));
  document.add(new Field("dateSc", dateSc, Field.Store.YES, Field.Index.TOKENIZED));

  indexWriter.addDocument(document);

inputs for the document:

compCallingPartyNumber="9840836588";
compCalledPartyNumber="9840861114";
compImsiNumber="984510005469874";
callType="1";
compChargDur="98456";
compTimeSc="984";
outgoingRoute="i987j";
incomingRoute="poi09";
cgiLocation="dft1234567";

here is my search code:

  Directory indexDir2 = FSDirectory.getDirectory(indexSourceDir02, false);
  IndexReader indexSource2 = IndexReader.open(indexDir2);
  Directory indexDir3 = FSDirectory.getDirectory(indexSourceDir03, false);
  IndexReader indexSource3 = IndexReader.open(indexDir3);
  Directory indexDir4 = FSDirectory.getDirectory(indexSourceDir04, false);
  IndexReader indexSource4 = IndexReader.open(indexDir4);

  IndexReader[] readArray = { indexSource2, indexSource3, indexSource4 };
  // merged reader
  IndexReader mergedReader = new MultiReader(readArray);
  IndexSearcher is = new IndexSearcher(mergedReader);

  QueryParser parser = new QueryParser("contents", new StandardAnalyzer());

  String searchQuery = new StringBuffer().append(inputNo)
      .append(" AND dateSc:[").append(fromDate).append(" TO ")
      .append(toDate).append("]").append(" AND ")
      .append(callTyp).toString();

  Query callDetailquery = parser.parse(searchQuery);

  hits = is.search(callDetailquery);

testn wrote:
> 
> Can you provide more info about your index? How many documents, fields and
> what is the average document length?
> 
> 
> Sebastin wrote:
>> 
>> Hi testn,
>>            i index the dateSc as 070904(2007/09/04) format.i am not using
>> any timestamp here.how can we effectively reopen the IndexSearcher  for
>> an hour and save the memory because my index gets updated every minute.
>> 
>> testn wrote:
>>> 
>>> Check out Wiki for more information at
>>> http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
>>> 
>>> 
>>> 
>>> Sebastin wrote:
>>>> 
>>>> Hi All,
>>>>        i used to search 3 Lucene Index store of size 6 GB,10 GB,10 GB
>>>> of records using MultiReader class.
>>>> 
>>>> here is the following code snippet:
>>>> 
>>>> 
>>>>      
>>>>                                          Directory indexDir2 = 
>>>>                       FSDirectory.getDirectory(indexSourceDir02,false);
>>>>                                          IndexReader indexSource2 =
>>>> IndexReader.open(indexDir2);
>>>>                                        Directory indexDir3 = 
>>>>                                             
>>>> FSDirectory.getDirectory(indexSourceDir03,false);
>>>>                                          IndexReader indexSource3 =
>>>> IndexReader.open(indexDir3);
>>>>                                          Directory indexDir4 = 
>>>>                                             
>>>> FSDirectory.getDirectory(indexSourceDir04,false);
>>>>                                          IndexReader indexSource4 =
>>>> IndexReader.open(indexDir4); 
>>>>                                         
>>>>            
>>>>                                          
>>>>         IndexReader[] readArray =
>>>> {indexSource2,indexSource3,indexSource4};
>>>>         //merged reader
>>>>         IndexReader mergedReader = new MultiReader(readArray);
>>>>         IndexSearcher is = new IndexSearcher(mergedReader);
>>>>         
>>>>                                 
>>>>                                  QueryParser parser = 
>>>>                                      new QueryParser("contents" ,new
>>>> StandardAnalyzer());
>>>>                                  
>>>>                                  
>>>>                                 String searchQuery= 
>>>>                                        new
>>>> StringBuffer().append(inputNo).append(" AND dateSc:["
>>>> ).append(fromDate).append(" TO ").append(toDate).append("]").append("
>>>> ").append("AND").append(" ").append(callTyp).toString();
>>>>                                    
>>>>                                  
>>>>                                  
>>>>                                  Query callDetailquery =
>>>> parser.parse(searchQuery);
>>>>                                  
>>>>                                  hits = is.search(callDetailquery); 
>>>> 
>>>> 
>>>> it takes 300 MB of RAM for every search and it is very very slow is
>>>> there any other way to control the Memory and to make search faster.i
>>>> use SINGLETON  to use the IndexSearcher as a one time used object for
>>>> all the instances.
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12492218
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
Can you provide more information about your index? How many documents and
fields does it have, and what is the average document length?



-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12484208
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by Sebastin <se...@gmail.com>.
Hi testn,
           I index dateSc in the format 070904 (2007/09/04); I am not using
any timestamp. How can we effectively reopen the IndexSearcher once an hour
to save memory, given that my index gets updated every minute?
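
Reopening on a fixed schedule rather than per update is the usual answer.
Lucene 2.x has no built-in searcher management, so the sketch below shows
only the general pattern: one shared instance, swapped out once it is older
than a maximum age. The `Factory` interface here is a hypothetical stand-in
(not Lucene API); in the real application its `open()` would call
`IndexReader.open(...)` and wrap the result in an `IndexSearcher`, and
`close()` would close the old reader.

```java
// Time-based reopen of a shared, expensive resource (for example a Lucene
// IndexSearcher). Generic sketch only -- the Lucene wiring is left to the
// pluggable Factory so this compiles stand-alone.
class TimedReopen<T> {

    interface Factory<T> {
        T open();            // e.g. new IndexSearcher(IndexReader.open(dir))
        void close(T old);   // e.g. old.close()
    }

    private final Factory<T> factory;
    private final long maxAgeMillis;
    private T current;
    private long openedAt;

    TimedReopen(Factory<T> factory, long maxAgeMillis) {
        this.factory = factory;
        this.maxAgeMillis = maxAgeMillis;
    }

    // Returns the shared instance, reopening it once it is older than
    // maxAgeMillis. Callers in between all see the same instance, so an
    // index that changes every minute still costs only one reopen per hour.
    synchronized T get(long nowMillis) {
        if (current == null || nowMillis - openedAt >= maxAgeMillis) {
            T old = current;
            current = factory.open();
            openedAt = nowMillis;
            if (old != null) {
                factory.close(old);
            }
        }
        return current;
    }
}
```

One caveat: closing the old searcher while another thread is still using it
needs care in a real server (reference counting, or simply letting the old
reader be garbage collected); the sketch ignores that for brevity.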


-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12478804
Sent from the Lucene - Java Users mailing list archive at Nabble.com.




Re: Java Heap Space -Out Of Memory Error

Posted by testn <te...@doramail.com>.
Check out Wiki for more information at
http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
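
The gist of that wiki page is to index dates at a coarse granularity so a
range query or filter only has to visit a handful of terms. Sebastin's
dateSc field is already day-resolution (yyMMdd), which keeps the term count
small. As a rough stand-alone illustration (not code from this thread), the
helper below enumerates the day-level terms a given range actually covers:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;

class DayTerms {
    // List every day-resolution term (yyMMdd), inclusive of both endpoints.
    static List<String> between(String from, String to) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyMMdd");
        List<String> terms = new ArrayList<String>();
        try {
            Calendar cur = Calendar.getInstance();
            cur.setTime(fmt.parse(from));
            Calendar end = Calendar.getInstance();
            end.setTime(fmt.parse(to));
            // Walk one day at a time until we pass the upper bound.
            while (!cur.getTime().after(end.getTime())) {
                terms.add(fmt.format(cur.getTime()));
                cur.add(Calendar.DAY_OF_MONTH, 1);
            }
        } catch (ParseException e) {
            throw new IllegalArgumentException("bad yyMMdd date", e);
        }
        return terms;
    }
}
```

With day resolution, a month-long range touches only about 30 terms. Had the
field stored a per-second timestamp instead, the same range could expand to
millions of terms, which is the kind of blow-up (TooManyClauses, large heap
use) the wiki page is about.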




-- 
View this message in context: http://www.nabble.com/Java-Heap-Space--Out-Of-Memory-Error-tf4376803.html#a12476607
Sent from the Lucene - Java Users mailing list archive at Nabble.com.

