Posted to java-user@lucene.apache.org by David Spencer <da...@tropo.com> on 2004/09/13 20:03:49 UTC
OptimizeIt -- Re: force gc idiom - Re: OutOfMemory example
Jiří Kuhn wrote:
> This doesn't work either!
You're right.
I'm running under JDK1.5 and trying larger values for -Xmx and it still
fails.
Running under (Borland's) OptimizeIt shows the number of Term and
TermInfo instances (both in org.apache.lucene.index) increasing every time
through the loop, by several hundred instances each.
I can trace through some Term instances on the reference graph in
OptimizeIt, but it's unclear to me what's normal. One *guess* is that
the WeakHashMap in either SegmentReader or FieldCacheImpl is the
problem.
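If the WeakHashMap guess is right, the mechanics would look like this (a stdlib-only sketch, not the real Lucene classes: a plain Object stands in for the SegmentReader key and a byte[] for the cached term data). A WeakHashMap only drops an entry after GC clears the weak reference to its key, so everything the values hold stays reachable as long as the key does:

```java
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) throws InterruptedException {
        WeakHashMap<Object, byte[]> cache = new WeakHashMap<Object, byte[]>();
        Object key = new Object();              // stands in for an open reader
        cache.put(key, new byte[1024 * 1024]);  // stands in for cached Terms/TermInfos
        System.out.println("entries while key is strongly held: " + cache.size());

        key = null;  // drop the last strong reference to the key
        // The megabyte of "term data" stays live until GC actually runs
        // and clears the weak key; System.gc() is only a hint.
        for (int i = 0; i < 10 && !cache.isEmpty(); i++) {
            System.gc();
            Thread.sleep(50);
        }
        System.out.println("entries after gc: " + cache.size());
    }
}
```

So a WeakHashMap cache isn't a true leak, but it can make memory look like it is climbing if collection lags behind allocation.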
>
> Let's concentrate on the first version of my code. I believe the code should run endlessly (as I said before: in version 1.4 final it does).
>
> Jiri.
>
> -----Original Message-----
> From: David Spencer [mailto:dave-lucene-user@tropo.com]
> Sent: Monday, September 13, 2004 5:34 PM
> To: Lucene Users List
> Subject: force gc idiom - Re: OutOfMemory example
>
>
> Jiří Kuhn wrote:
>
>
>>Thanks for the bug id; it looks like my problem, and I have stand-alone code with a main().
>>
>>What about the slow garbage collector? That looks like the wrong explanation to me.
>
>
>
> I've seen this written up before (JavaWorld?) as a way to more reliably
> "force" GC than a single System.gc() call. I think the second gc() call
> is supposed to clean up objects freed by the runFinalization() call...
>
> System.gc();
> Thread.sleep( 100);
> System.runFinalization();
> Thread.sleep( 100);
> System.gc();
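A slightly more defensive variant of the same idiom (just a sketch; the spec only treats System.gc() as a hint, so none of this is guaranteed) is to loop until a sentinel WeakReference is actually observed to clear:

```java
import java.lang.ref.WeakReference;

public class ForceGc {
    /**
     * Best-effort GC: retry until a sentinel WeakReference is cleared,
     * or give up after a few attempts. Clearing the sentinel is evidence
     * that a collection cycle really ran, which a bare System.gc() call
     * does not prove.
     */
    public static void forceGc() throws InterruptedException {
        WeakReference<Object> sentinel = new WeakReference<Object>(new Object());
        int attempts = 0;
        while (sentinel.get() != null && attempts++ < 10) {
            System.gc();
            System.runFinalization();
            Thread.sleep(100);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        forceGc();
        System.out.println("done");
    }
}
```

This at least distinguishes "GC never ran" from "GC ran and the memory is still held", which is the interesting question in this thread.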
>
>
>>Let's change the code once again:
>>
>>...
>>    public static void main(String[] args) throws IOException, InterruptedException
>>    {
>>        Directory directory = create_index();
>>
>>        for (int i = 1; i < 100; i++) {
>>            System.err.println("loop " + i + ", index version: " + IndexReader.getCurrentVersion(directory));
>>            search_index(directory);
>>            add_to_index(directory, i);
>>            System.gc();
>>            Thread.sleep(1000); // whatever value you want
>>        }
>>    }
>>...
>>
>>and in the 4th iteration a java.lang.OutOfMemoryError appears again.
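One thing worth checking (the bodies of search_index and add_to_index aren't shown, so this is only a guess): if search_index opens a new searcher each iteration without closing it, every pass pins another reader plus its Term/TermInfo data, and no amount of System.gc() will help. The shape of the fix, sketched here with a hypothetical Searcher stand-in rather than the real Lucene classes:

```java
// Hypothetical stand-in for an IndexSearcher-like object;
// the point is the try/finally, not the API.
class Searcher {
    boolean closed = false;
    void search() { /* ... run the query ... */ }
    void close() { closed = true; } // releases resources held by the reader
}

public class SearchLoop {
    static void searchIndex() {
        Searcher searcher = new Searcher(); // opens the index
        try {
            searcher.search();
        } finally {
            searcher.close(); // without this, each iteration leaks the reader's caches
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            searchIndex(); // safe to call repeatedly: nothing accumulates
        }
        System.out.println("ok");
    }
}
```

If the searcher is already being closed, then the growth really is inside Lucene's own caches, which matches the OptimizeIt observations above.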
>>
>>Jiri.
>>
>>
>>-----Original Message-----
>>From: John Moylan [mailto:johnm@rte.ie]
>>Sent: Monday, September 13, 2004 4:53 PM
>>To: Lucene Users List
>>Subject: Re: OutOfMemory example
>>
>>
>>http://issues.apache.org/bugzilla/show_bug.cgi?id=30628
>>
>>You can close the index, but the garbage collector still needs to
>>reclaim the memory, and that may take longer than one pass of your loop.
>>
>>John
>>
>
>
>
---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org
Re: OptimizeIt -- Re: force gc idiom - Re: OutOfMemory example
Posted by "Kevin A. Burton" <bu...@newsmonster.org>.
David Spencer wrote:
> Jiří Kuhn wrote:
>
>> This doesn't work either!
>
>
> You're right.
> I'm running under JDK1.5 and trying larger values for -Xmx and it
> still fails.
>
> Running under (Borland's) OptimizeIt shows the number of Term and
> TermInfo instances (both in org.apache.lucene.index) increasing every
> time through the loop, by several hundred instances each.
Yes... I'm running into a similar situation on JDK 1.4.2 with Lucene
1.3... I used the JMP debugger, and all my memory is taken by Term and
TermInfo instances...
> I can trace through some Term instances on the reference graph in
> OptimizeIt, but it's unclear to me what's normal. One *guess* is that
> the WeakHashMap in either SegmentReader or FieldCacheImpl is the
> problem.
Kevin
--
Please reply using PGP.
http://peerfear.org/pubkey.asc
NewsMonster - http://www.newsmonster.org/
Kevin A. Burton, Location - San Francisco, CA, Cell - 415.595.9965
AIM/YIM - sfburtonator, Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
IRC - freenode.net #infoanarchy | #p2p-hackers | #newsmonster