Posted to java-user@lucene.apache.org by Rasik Pandey <ra...@ajlsm.com> on 2004/02/20 12:23:17 UTC

RE : MultiReader

Hello,

> I just committed one!  This was really already there, in
> SegmentsReader,
> but it was not public and needed a few minor changes.  Enjoy.
> 
> Doug

Great....thanks! Do you feel that managing an index made up of numerous smaller indices is an effective use of MultiReader and MultiSearcher? Ignoring for a moment the potential for a "Too many open files" error, I feel it may be a decent/reasonable way to ensure overall index integrity by managing smaller parts.
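To make the idea concrete, here is a rough sketch of what I have in mind (I am assuming the API as just committed; the index paths and the class name are made up for illustration): open each small index with its own IndexReader and search them as one logical index through MultiReader.

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiReader;
    import org.apache.lucene.search.IndexSearcher;

    // Rough sketch: expose several small indices to a searcher as one
    // logical index via MultiReader. The paths are made up.
    public class MultiIndexSearch {
        public static void main(String[] args) throws Exception {
            IndexReader[] parts = new IndexReader[] {
                IndexReader.open("/indexes/part1"),
                IndexReader.open("/indexes/part2"),
                IndexReader.open("/indexes/part3")
            };
            IndexReader all = new MultiReader(parts);      // one logical view
            IndexSearcher searcher = new IndexSearcher(all);
            // ... build a Query and call searcher.search(query) as usual ...
            all.close();  // closing the MultiReader also closes the sub-readers
        }
    }

I believe a MultiSearcher over one IndexSearcher per index would give the same kind of per-part management at the Searcher level.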

As a side note, regarding the "Too many open files" issue, has anyone noticed that this could be related to the JVM? For instance, I have a coworker who tried to run a number of "optimized" indices in one JVM instance and received the "Too many open files" error. With the same number of available file descriptors (on Linux, ulimit = unlimited), he split the indices over two JVM instances and the problem disappeared. He also tested by increasing the memory available to the JVM instance via the -Xmx parameter, with all indices running in one JVM instance, and again the problem disappeared. I think the issue deserves more testing to pinpoint the exact cause, but I was wondering if anyone has already experienced anything similar, or if this information could be of use to anyone, in which case we should probably start a new thread dedicated to this issue.


Regards,
Rasik

 


---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org


RE: open files under linux

Posted by Stephen Eaton <se...@gateway.net.au>.
The easiest way is to use sysctl to view and change the maximum open files
setting. For some reason fs.file-max is set to 8000 or something similarly
small (it is for Mandrake, anyway).

Run sysctl fs.file-nr to view current usage and the configured maximum. It
reports three values, xxx yyy zzz, where xxx = file handles the system has
allocated, yyy = allocated-but-unused handles, and zzz = the maximum. So xxx
should never get near zzz; if it does, you will get the "too many open files"
errors. Try running the command while you are hitting the issue and see what
the values are.
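If it helps, here is a rough Java sketch (Linux only; the class name is made up) that reads /proc/sys/fs/file-nr from inside the JVM, so you can log those three values while the indices are open:

    import java.io.BufferedReader;
    import java.io.FileReader;

    // Rough sketch: print the fs.file-nr values (allocated handles,
    // allocated-but-unused handles, maximum) from inside the process.
    public class FileHandleCheck {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(
                    new FileReader("/proc/sys/fs/file-nr"));
            String[] fields = in.readLine().trim().split("\\s+");
            in.close();
            System.out.println("allocated=" + fields[0]
                    + " unused=" + fields[1] + " max=" + fields[2]);
        }
    }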

To change it, use sysctl -w fs.file-max="32768" to give it something decent.

Should solve your problems.

Stephen...


> -----Original Message-----
> From: Morus Walter [mailto:morus.walter@tanto.de] 
> Sent: Friday, 20 February 2004 7:41 PM
> To: Lucene Users List
> Subject: open files under linux
> 
> Rasik Pandey writes:
>  
> > As a side note, regarding the "Too many open files" issue,
> > has anyone noticed that this could be related to the JVM?
> > For instance, I have a coworker who tried to run a number
> > of "optimized" indices in one JVM instance and received the
> > "Too many open files" error. With the same number of
> > available file descriptors (on Linux, ulimit = unlimited),
> > he split the indices over two JVM instances and the problem
> > disappeared. He also tested by increasing the memory
> > available to the JVM instance via the -Xmx parameter, with
> > all indices running in one JVM instance, and again the
> > problem disappeared. I think the issue deserves more
> > testing to pinpoint the exact cause, but I was wondering if
> > anyone has already experienced anything similar, or if this
> > information could be of use to anyone, in which case we
> > should probably start a new thread dedicated to this issue.
> > 
> The limit is per process. Two JVMs make two processes.
> (There's a per-system limit too, but it's much higher; I
> think you find it in /proc/sys/fs/file-max and its default
> value depends on the amount of memory the system has.)
> 
> AFAIK there's no way of setting the open-files limit to
> unlimited. At least neither bash nor tcsh accepts that.
> But it should not be a problem to set it to very high values.
> And you should be able to increase the system-wide limit by
> writing to /proc/sys/fs/file-max as long as you have enough memory.
> 
> I never used this, though.
> 
> Morus
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: lucene-user-help@jakarta.apache.org
> 
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org


open files under linux

Posted by Morus Walter <mo...@tanto.de>.
Rasik Pandey writes:
 
> As a side note, regarding the "Too many open files" issue, has anyone noticed that this could be related to the JVM? For instance, I have a coworker who tried to run a number of "optimized" indices in one JVM instance and received the "Too many open files" error. With the same number of available file descriptors (on Linux, ulimit = unlimited), he split the indices over two JVM instances and the problem disappeared. He also tested by increasing the memory available to the JVM instance via the -Xmx parameter, with all indices running in one JVM instance, and again the problem disappeared. I think the issue deserves more testing to pinpoint the exact cause, but I was wondering if anyone has already experienced anything similar, or if this information could be of use to anyone, in which case we should probably start a new thread dedicated to this issue.
> 
The limit is per process. Two JVMs make two processes.
(There's a per-system limit too, but it's much higher; I think you find
it in /proc/sys/fs/file-max and its default value depends on the amount
of memory the system has.)

AFAIK there's no way of setting the open-files limit to unlimited. At least
neither bash nor tcsh accepts that.
But it should not be a problem to set it to very high values.
And you should be able to increase the system-wide limit by writing to
/proc/sys/fs/file-max as long as you have enough memory.

I never used this, though.

Morus

---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org