Posted to embperl@perl.apache.org by Brian Burke <bb...@lssi.net> on 2002/02/06 16:50:26 UTC

file not found error message

I have a problem that I think may be related to Embperl.  I did some Google
searches, and found a few others who had reported similar problems under
Embperl, so I thought I'd pose this to this list for suggestions.

I have a server that is up and running, taking about 50k-100k hits a day.
For the most part, it is working fine.  About every 2-3 weeks, I start getting a
lot of messages like this in my error_log:
[19498]ERR:  30: Line 1: Not found /valid/path/to/some/file.html

If I restart apache, these errors go away, at least for a while.

Any ideas as to what could be causing this?  I'm running Apache/1.3.14 (Unix)
mod_perl/1.24_01 mod_ssl/2.7.1 OpenSSL/0.9.6 with Embperl-1.3.1.

thanks,
Brian
--
______________________________________
Brian Burke
bburke@lssi.net
______________________________________



---------------------------------------------------------------------
To unsubscribe, e-mail: embperl-unsubscribe@perl.apache.org
For additional commands, e-mail: embperl-help@perl.apache.org


Re: file not found error message

Posted by Brian Burke <bb...@lssi.net>.
Thanks everyone for your suggestions.  You have given me enough ammo to
attack the problem!

Brian


Ed Grimm wrote:

> It depends on your OS (and I forget which one you said you were using),
> but generally, there is.  Normally, this is given by ulimit -Sn, but
> I've seen systems that have another value that won't show up with
> ulimit.  And it's possible that your shell environment has a different
> limit than the apache environment, as your shell init process can mess
> with ulimit -Sn, and apache doesn't go through this normally.
>
> Note that lsof will not list any files which have been deleted since
> opening; I don't know if Apache uses this trick for temporary cache, but
> I have seen it used by a number of programs.
>
> Ed
>
> On Wed, 6 Feb 2002, Brian Burke wrote:
>
> > I'm thinking that maybe I'm running into a user limit (httpd) rather
> > than a process limit.  I show only 60 or so open handles per httpd
> > process, with a system limit of 1024.  Is there such a thing as a user
> > limit?  I know there are limits on the number of user processes, but I'm
> > not sure about open file handles.
> >
> > I may try to attack the problem short-term by having Apache throttle
> > back to fewer httpds when idle, and lowering MaxRequestsPerChild to
> > have the children die earlier.
> >
> > Brian
> >
> >
> > On Thu, 7 Feb 2002, Axel Beckert wrote:
> >
> > > Hi!
> > >
> > > On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> > > > When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> > > > 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> > > > -l and it is close to 1024, I have a problem?
> > >
> > > Only if you run Apache with the -X flag (one process only, a kind
> > > of debugging mode), because 'lsof | fgrep httpd' would match all
> > > httpd processes. And even when I grepped for the PID of a single
> > > httpd process, I didn't always get near the ulimit with wc -l. My
> > > guess is that you need the right timing for the lsof.
> > >
> > > I tried the following:
> > >
> > >                 lsof | fgrep httpd | sort -k9
> > >
> > > (you may need a column other than 9, depending on the parameters
> > > to lsof) to sort by the paths of the open files. If you see one file
> > > very often (tens of times per httpd process), that's usually the one
> > > causing the trouble. In my case it was the magic file, so I knew I had
> > > to search in or around File::MMagic for the problem.
> > >
> > > But since with Apache (1.x) each child can only handle one request at a
> > > time, something must go really wrong to reach that limit with a single
> > > request. (The Solaris limit of 64 was easier to reach... ;-)
> > >
> > >             Regards, Axel
> > >
> >
> > --
> > ______________________________________
> > Brian Burke
> > bburke@lssi.net
> > ______________________________________
> >
> >
> >
>

--
______________________________________
Brian Burke
bburke@lssi.net
______________________________________





Re: file not found error message

Posted by Ed Grimm <ed...@asgard.rsc.raytheon.com>.
It depends on your OS (and I forget which one you said you were using),
but generally, there is.  Normally, this is given by ulimit -Sn, but
I've seen systems that have another value that won't show up with
ulimit.  And it's possible that your shell environment has a different
limit than the apache environment, as your shell init process can mess
with ulimit -Sn, and apache doesn't go through this normally.

Note that lsof will not list any files which have been deleted since
opening; I don't know if Apache uses this trick for temporary cache, but
I have seen it used by a number of programs.

Ed

On Wed, 6 Feb 2002, Brian Burke wrote:

> I'm thinking that maybe I'm running into a user limit (httpd) rather
> than a process limit.  I show only 60 or so open handles per httpd 
> process, with a system limit of 1024.  Is there such a thing as a user 
> limit?  I know there are limits on the number of user processes, but I'm 
> not sure about open file handles.
> 
> I may try to attack the problem short-term by having Apache throttle
> back to fewer httpds when idle, and lowering MaxRequestsPerChild to
> have the children die earlier.
> 
> Brian
> 
> 
> On Thu, 7 Feb 2002, Axel Beckert wrote:
> 
> > Hi!
> > 
> > On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> > > When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> > > 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> > > -l and it is close to 1024, I have a problem?
> > 
> > Only if you run Apache with the -X flag (one process only, a kind
> > of debugging mode), because 'lsof | fgrep httpd' would match all
> > httpd processes. And even when I grepped for the PID of a single
> > httpd process, I didn't always get near the ulimit with wc -l. My
> > guess is that you need the right timing for the lsof.
> > 
> > I tried the following: 
> > 
> > 		    lsof | fgrep httpd | sort -k9
> > 
> > (you may need a column other than 9, depending on the parameters
> > to lsof) to sort by the paths of the open files. If you see one file
> > very often (tens of times per httpd process), that's usually the one
> > causing the trouble. In my case it was the magic file, so I knew I had
> > to search in or around File::MMagic for the problem.
> > 
> > But since with Apache (1.x) each child can only handle one request at a
> > time, something must go really wrong to reach that limit with a single
> > request. (The Solaris limit of 64 was easier to reach... ;-)
> > 
> > 		Regards, Axel
> > 
> 
> -- 
> ______________________________________             
> Brian Burke
> bburke@lssi.net
> ______________________________________
> 
> 
> 





Re: file not found error message

Posted by Brian Burke <bb...@lssi.net>.

I'm thinking that maybe I'm running into a user limit (httpd) rather
than a process limit.  I show only 60 or so open handles per httpd 
process, with a system limit of 1024.  Is there such a thing as a user 
limit?  I know there are limits on the number of user processes, but I'm 
not sure about open file handles.

I may try to attack the problem short-term by having Apache throttle
back to fewer httpds when idle, and lowering MaxRequestsPerChild to
have the children die earlier.
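A minimal Apache 1.3 httpd.conf fragment for that kind of short-term
workaround might look like this; the values are illustrative, not tuned
for this site:

```
# Keep fewer idle children around when the server is quiet.
MinSpareServers      2
MaxSpareServers      5
# Recycle each child after 500 requests, before leaked file
# handles can pile up (0 would mean "never recycle").
MaxRequestsPerChild  500
```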

Brian


On Thu, 7 Feb 2002, Axel Beckert wrote:

> Hi!
> 
> On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> > When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> > 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> > -l and it is close to 1024, I have a problem?
> 
> Only if you run Apache with the -X flag (one process only, a kind
> of debugging mode), because 'lsof | fgrep httpd' would match all
> httpd processes. And even when I grepped for the PID of a single
> httpd process, I didn't always get near the ulimit with wc -l. My
> guess is that you need the right timing for the lsof.
> 
> I tried the following: 
> 
> 		    lsof | fgrep httpd | sort -k9
> 
> (you may need a column other than 9, depending on the parameters
> to lsof) to sort by the paths of the open files. If you see one file
> very often (tens of times per httpd process), that's usually the one
> causing the trouble. In my case it was the magic file, so I knew I had
> to search in or around File::MMagic for the problem.
> 
> But since with Apache (1.x) each child can only handle one request at a
> time, something must go really wrong to reach that limit with a single
> request. (The Solaris limit of 64 was easier to reach... ;-)
> 
> 		Regards, Axel
> 

-- 
______________________________________             
Brian Burke
bburke@lssi.net
______________________________________




Re: file not found error message

Posted by Axel Beckert <ab...@deuxchevaux.org>.
Hi!

On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> -l and it is close to 1024, I have a problem?

Only if you run Apache with the -X flag (one process only, a kind
of debugging mode), because 'lsof | fgrep httpd' would match all
httpd processes. And even when I grepped for the PID of a single
httpd process, I didn't always get near the ulimit with wc -l. My
guess is that you need the right timing for the lsof.

I tried the following: 

		    lsof | fgrep httpd | sort -k9

(you may need a column other than 9, depending on the parameters
to lsof) to sort by the paths of the open files. If you see one file
very often (tens of times per httpd process), that's usually the one
causing the trouble. In my case it was the magic file, so I knew I had
to search in or around File::MMagic for the problem.
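A hedged refinement of that eyeballing step: count how often each path
occurs, so the leaking file floats to the top. The lsof-style lines
below are made up so the pipeline runs standalone; in practice you
would pipe in 'lsof | fgrep httpd' instead (assuming the path is the
last column, which depends on your lsof's output format):

```shell
# Count occurrences of each open-file path, most frequent first.
# Sample input stands in for real `lsof | fgrep httpd` output.
printf '%s\n' \
  'httpd 101 www  5r REG /opt/local/apache/conf/magic' \
  'httpd 101 www  6r REG /opt/local/apache/conf/magic' \
  'httpd 101 www  7r REG /opt/local/apache/conf/magic' \
  'httpd 102 www  4r REG /var/www/htdocs/index.html' \
| awk '{print $NF}' | sort | uniq -c | sort -rn
# The magic file shows up 3 times, index.html once.
```

With real lsof output, a path repeated tens of times per child is the
prime leak suspect.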

But since with Apache (1.x) each child can only handle one request at a
time, something must go really wrong to reach that limit with a single
request. (The Solaris limit of 64 was easier to reach... ;-)

		Regards, Axel
-- 
Axel Beckert - abe@deuxchevaux.org - http://abe.home.pages.de/



Re: file not found error message

Posted by Brian Burke <bb...@lssi.net>.
Thanks Axel.  This very well could be my problem.

When I run ulimit -Hn and ulimit -Sn, the system shows I can have 1024 open
handles.  Does that mean if I run lsof | fgrep httpd | wc -l and it is close to 1024,
I have a problem?

Brian


Axel Beckert wrote:

> Hi!
>
> On Wed, Feb 06, 2002 at 10:50:26AM -0500, Brian Burke wrote:
> > I have a server that is up and running, taking about 50k-100k hits a
> > day. For the most part, it is working fine. About every 2-3 weeks, I
> > start getting a lot of messages like this in my error_log:
> >
> > [19498]ERR:  30: Line 1: Not found /valid/path/to/some/file.html
> >
> > If I restart apache, these errors go away, at least for a while.
> >
> > Any ideas as to what could be causing this?
>
> On Wed, Feb 06, 2002 at 10:01:05AM -0600, erik wrote:
> > <AOL>me too.</AOL>
>
> I had this problem, too. It can also cause 403s if some directories
> have .htaccess files.
>
> The basic problem is 'ulimit -Sn', the number of allowed file handles
> per process. On Solaris (where I had that problem), it defaults to 64,
> which is IMHO quite small.
>
> Setting this higher may solve the problem. But beware that some
> software isn't written for file handle numbers higher than 256 or 1024.
>
> For more details on that issue see
> http://www.rational.com/technotes/clearcase_html/ClearCase_html/technote_344.html.
>
> If that doesn't solve the problem, look through the Perl code you use
> for unclosed file handles. The faulty code doesn't need to be your
> own; it may be in some other Perl module you're using.
>
> E.g. in my case the troublemaker was File::MMagic. The problem was
> solved when I rewrote my code so that 'new File::MMagic' ran once per
> Embperl page instead of once per request:
>
> [! # [! !] blocks run only once, when Embperl first compiles the page
>    use File::MMagic;
>    $CLEANUP{mime_magic} = 0;  # don't clean this variable up after each request
>    $mime_magic = new File::MMagic('/opt/local/apache/conf/magic'); !]
>
> > Except that I don't have to wait weeks under serious load.
>
> Before raising the ulimit I got this after a few hours, that's right.
>
> > I get it on my dev box. But very sporadically. Hitting reload on my
> > browser usually clears it, if not 'apachectl restart' does.
>
> Hitting reload clears it if the persistent connection from your
> browser is closed and the new connection is made to a different
> child, one which probably doesn't have as many open file handles as
> the one before.
>
> P.S.: In this case it would probably be interesting to know your
> operating system and the output of 'ulimit -Sn' and 'ulimit -Hn'.
>
> P.P.S.: This kind of problem is best debugged with something like
> 'lsof | fgrep httpd' (lsof = list open files). For Solaris, e.g., you
> can get lsof as a package from www.sunfreeware.com; for Linux it should
> be included in every distribution.
>
>                 Regards, Axel
> --
> Axel Beckert - abe@deuxchevaux.org - http://abe.home.pages.de/
>

--
______________________________________
Brian Burke
bburke@lssi.net
______________________________________





Re: file not found error message

Posted by Axel Beckert <ab...@deuxchevaux.org>.
Hi!

On Wed, Feb 06, 2002 at 10:50:26AM -0500, Brian Burke wrote:
> I have a server that is up and running, taking about 50k-100k hits a
> day. For the most part, it is working fine. About every 2-3 weeks, I
> start getting a lot of messages like this in my error_log:
>
> [19498]ERR:  30: Line 1: Not found /valid/path/to/some/file.html
> 
> If I restart apache, these errors go away, at least for a while.
> 
> Any ideas as to what could be causing this?

On Wed, Feb 06, 2002 at 10:01:05AM -0600, erik wrote:
> <AOL>me too.</AOL>

I had this problem, too. It can also cause 403s if some directories
have .htaccess files.

The basic problem is 'ulimit -Sn', the number of allowed file handles
per process. On Solaris (where I had that problem), it defaults to 64,
which is IMHO quite small.

Setting this higher may solve the problem. But beware that some
software isn't written for file handle numbers higher than 256 or 1024.

For more details on that issue see
http://www.rational.com/technotes/clearcase_html/ClearCase_html/technote_344.html.
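To see how close a child actually is to its limit, a self-contained
sketch like the following can help. It defaults to the current shell's
PID so it runs standalone; substitute a real httpd child PID in
practice. The /proc/<pid>/fd path is Linux-specific (an assumption
here), with lsof as a fallback elsewhere:

```shell
# Compare the soft/hard fd limits with what one process has open.
pid="${1:-$$}"
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"
if [ -d "/proc/$pid/fd" ]; then
    # Linux: each open descriptor is a symlink under /proc/<pid>/fd
    echo "open fds for $pid: $(ls "/proc/$pid/fd" | wc -l)"
else
    # e.g. Solaris: fall back to lsof, roughly one line per descriptor
    echo "open fds for $pid: $(lsof -p "$pid" 2>/dev/null | wc -l)"
fi
```

If the fd count creeps toward the soft limit as a child ages, you are
looking at a leak rather than legitimate load.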

If that doesn't solve the problem, look through the Perl code you use
for unclosed file handles. The faulty code doesn't need to be your
own; it may be in some other Perl module you're using.

E.g. in my case the troublemaker was File::MMagic. The problem was
solved when I rewrote my code so that 'new File::MMagic' ran once per
Embperl page instead of once per request:

[! # [! !] blocks run only once, when Embperl first compiles the page
   use File::MMagic;
   $CLEANUP{mime_magic} = 0;  # don't clean this variable up after each request
   $mime_magic = new File::MMagic('/opt/local/apache/conf/magic'); !]

> Except that I don't have to wait weeks under serious load. 

Before raising the ulimit I got this after a few hours, that's right.

> I get it on my dev box. But very sporadically. Hitting reload on my
> browser usually clears it, if not 'apachectl restart' does.

Hitting reload clears it if the persistent connection from your
browser is closed and the new connection is made to a different
child, one which probably doesn't have as many open file handles as
the one before.

P.S.: In this case it would probably be interesting to know your
operating system and the output of 'ulimit -Sn' and 'ulimit -Hn'.

P.P.S.: This kind of problem is best debugged with something like
'lsof | fgrep httpd' (lsof = list open files). For Solaris, e.g., you
can get lsof as a package from www.sunfreeware.com; for Linux it should
be included in every distribution.

		Regards, Axel
-- 
Axel Beckert - abe@deuxchevaux.org - http://abe.home.pages.de/
