Posted to users@httpd.apache.org by ca...@t-systems.it on 2003/10/30 15:31:34 UTC
[users@httpd] 256 file descriptor max?
Sorry if this is a FAQ; I can't find a *satisfactory* answer elsewhere.
I'm running Apache 1.3.19 on Solaris 9; it hosts a slightly buggy
application which opens, and keeps open, many sockets.
When the file-descriptor count in an httpd process reaches 256, Apache
stops serving files and logs the error "(24)Too many open files:
file permissions deny server access: myfile".
I have no control over the application wasting descriptors, so my only
option is to raise the limit to a manageable value; but even after raising
the OS file-descriptor limit with "ulimit -n 1024" and restarting
Apache, it still stops at 256.
Having read http://access1.sun.com/technotes/01406.html, I believe this is
a standard I/O library limit: Apache opens files with fopen(), and the
32-bit Solaris stdio implementation stores the descriptor in an unsigned
char, so stdio streams are limited to descriptors below 256.
My question is: is there anybody out there with the same need? Are these
256 descriptors enough for everybody? Or is there a workaround?
TIA
Carlo
---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
To unsubscribe from the digest, e-mail: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org
Re: [users@httpd] 256 file descriptor max?
Posted by Luis Gallegos <lg...@todo1.com>.
On Thu, 2003-10-30 at 09:31, carlo.montanari@t-systems.it wrote:
> Sorry if this is a FAQ; I can't find a *satisfactory* answer elsewhere.
> I'm running Apache 1.3.19 on Solaris 9; it hosts a slightly buggy
> application which opens, and keeps open, many sockets.
> When the file-descriptor count in an httpd process reaches 256, Apache
> stops serving files and logs the error "(24)Too many open files:
> file permissions deny server access: myfile".
> I have no control over the application wasting descriptors, so my only
> option is to raise the limit to a manageable value; but even after raising
> the OS file-descriptor limit with "ulimit -n 1024" and restarting
> Apache, it still stops at 256.
> Having read http://access1.sun.com/technotes/01406.html, I believe this is
> a standard I/O library limit: Apache opens files with fopen(), and the
> 32-bit Solaris stdio implementation stores the descriptor in an unsigned
> char, so stdio streams are limited to descriptors below 256.
> My question is: is there anybody out there with the same need? Are these
> 256 descriptors enough for everybody? Or is there a workaround?
>
Use the ServerLimit directive in httpd.conf, i.e.
ServerLimit 400
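A caveat on this suggestion: ServerLimit exists only in Apache 2.x, where it caps the number of server processes rather than file descriptors, and Apache directives take space-separated arguments. In 1.3 the process ceiling is the compile-time HARD_SERVER_LIMIT constant instead. An Apache 2.x prefork fragment would look like:

```apache
# Apache 2.x prefork only -- not valid in 1.3, where the ceiling
# is the compile-time HARD_SERVER_LIMIT (default 256).
# ServerLimit bounds processes, not per-process file descriptors.
ServerLimit 400
MaxClients  400
```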
Luis
[users@httpd] Apache tuning for high performance
Posted by Richard Correia <ri...@ugamsolutions.com>.
Hi,
I have a webserver with 512 MB of SDRAM and an Intel 733 MHz processor.
Server version: Apache/1.3.23 (Unix)
My httpd.conf says...
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
MinSpareServers 5
MaxSpareServers 10
StartServers 5
MaxClients 150
MaxRequestsPerChild 0
For the next four months I will be hosting a small JSP page, for which I
am expecting about 51,200,000 hits. The page serves a cookie and a small
piece of HTML text.
Most hits will come on Mondays and Fridays during the day.
So if I get 5000 hits in a minute, will this configuration be able to
handle that?
I need to understand whether my Apache is capable of taking this load.
What effect will it have if I set KeepAlive Off and Timeout 50 seconds?
Please suggest how I should evaluate my server against this requirement.
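A rough capacity check can answer most of this. Here is a back-of-envelope sketch (Python, using Little's law L = lambda * W; the 50 ms service time is an assumption, not a figure from the thread):

```python
# Back-of-envelope worker count via Little's law: L = lambda * W,
# i.e. concurrent workers = arrival rate * time each worker is held.
# Assumption (not from the thread): ~50 ms service time per request.

peak_rate = 5000 / 60          # requests per second at peak (~83 rps)

service_time = 0.05            # seconds per request (assumed)
workers_no_keepalive = peak_rate * service_time

keepalive_hold = 15            # KeepAliveTimeout from the posted config
workers_keepalive = peak_rate * keepalive_hold  # worst case: every conn idles

print(f"peak rate: {peak_rate:.0f} req/s")
print(f"workers needed without KeepAlive: {workers_no_keepalive:.0f}")
print(f"workers needed with 15 s KeepAlive worst case: {workers_keepalive:.0f}")
```

Under these assumptions, the worst case with KeepAlive On and a 15 s timeout needs on the order of 1250 workers, far above MaxClients 150, while without KeepAlive a handful suffice; that is why turning KeepAlive off (or sharply lowering KeepAliveTimeout) matters for a one-page, one-request workload like this.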
Sincerely,
Rich
Re: [users@httpd] 256 file descriptor max?
Posted by "John P. Dodge" <do...@cruciate.ca.boeing.com>.
On Thu, 30 Oct 2003 carlo.montanari@t-systems.it wrote:
> I have no control on the application wasting descriptors, so my only option
> is trying to increase the max limit to a manageable value; but even if I
> change the OS limit of file descriptors with "ulimit -n 1024" and restart
> descriptors enough for everybody?
> Or is there a workaround?
>
> TIA
>
> Carlo
>
For Solaris there is a system parameter you can set in /etc/system.
set rlim_fd_max = 4096
set rlim_fd_cur = 2048
You will need to do a reconfigure boot after modifying /etc/system.
either:
touch /reconfigure
reboot
or
reboot -- -r
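After the reconfigure boot, it is worth verifying that the new limits actually took effect. A small sketch (ulimit works in any POSIX shell; plimit and pfiles are Solaris-specific proc tools):

```shell
# Show the shell's soft and hard file-descriptor limits;
# the soft limit should now reflect rlim_fd_cur from /etc/system.
ulimit -Sn
ulimit -Hn

# On Solaris, inspect a running httpd process directly:
#   plimit <pid>   # its resource limits
#   pfiles <pid>   # its open file descriptors
```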
----------------------------------------
"My hovercraft is full of eels"
John P. Dodge
Boeing Shared Services