Posted to bugs@httpd.apache.org by bu...@apache.org on 2006/06/14 07:46:30 UTC

DO NOT REPLY [Bug 39807] - large files / filesystem corruption can cause apache2 to eat up all available memory

DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://issues.apache.org/bugzilla/show_bug.cgi?id=39807>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=39807


chip@force-elite.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |INVALID




------- Additional Comments From chip@force-elite.com  2006-06-14 05:46 -------
The out-of-memory condition comes from trying to split the huge file into
buckets of AP_MAX_SENDFILE bytes so that each section of the file can be sent
with sendfile().  I don't believe there is anything we can do in this case,
except to call the OOM abort function in APR, which has already been done in
trunk.
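
To see the scale of the problem, here is a minimal standalone sketch (plain
C, not the actual httpd/APR code) of the arithmetic: one bucket structure is
allocated per AP_MAX_SENDFILE-sized slice, so a corrupt filesystem that
reports an absurd length forces an absurd number of allocations.  The 16 MB
value matches AP_MAX_SENDFILE in httpd's core.c; the ~100 bytes of per-bucket
overhead is a rough assumption for illustration only.

#include <stdio.h>
#include <stdint.h>

#define AP_MAX_SENDFILE 16777216  /* 16 MB (2^24), as in httpd's core.c */

int main(void)
{
    /* A corrupted filesystem can report an absurd file length; a value
     * near the top of a signed 64-bit off_t is used here to illustrate. */
    int64_t bogus_len = INT64_C(1) << 62;   /* ~4.6 exabytes */

    /* One bucket structure is allocated per AP_MAX_SENDFILE-sized slice. */
    int64_t nbuckets = (bogus_len + AP_MAX_SENDFILE - 1) / AP_MAX_SENDFILE;

    /* Assume ~100 bytes of heap per bucket (rough guess, not an APR figure). */
    int64_t bytes = nbuckets * 100;

    printf("buckets needed: %lld (~%lld GB of bucket metadata alone)\n",
           (long long)nbuckets, (long long)(bytes >> 30));
    return 0;
}

For the 4.6 EB length above that works out to roughly 2^38 buckets and tens
of terabytes of metadata, which no process can allocate.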

If you had a 64-bit machine/OS/httpd (or any platform where sizeof(apr_off_t)
<= sizeof(apr_size_t)), I believe it would work, since it would attempt to
sendfile() the whole file in one bucket rather than in millions of
AP_MAX_SENDFILE-sized buckets.
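
That size-type observation can be sketched as follows.  buckets_needed() is
a hypothetical helper that mirrors the branch described above, with long long
standing in for apr_off_t and size_t for apr_size_t; it is not the real
bucket API.

#include <stdio.h>
#include <stddef.h>

#define AP_MAX_SENDFILE 16777216  /* 16 MB slice size, as in httpd's core.c */

static long long buckets_needed(long long file_len)
{
    if (sizeof(long long) <= sizeof(size_t)) {
        /* 64-bit case: the whole length fits in one bucket, so the core
         * can hand sendfile() a single range with no per-slice allocation. */
        return 1;
    }
    /* 32-bit case: one bucket per AP_MAX_SENDFILE slice of the file. */
    return (file_len + AP_MAX_SENDFILE - 1) / AP_MAX_SENDFILE;
}

int main(void)
{
    printf("buckets for a 1 TB file here: %lld\n",
           buckets_needed(1LL << 40));
    return 0;
}

On an LP64 system both types are 8 bytes wide, so the single-bucket path is
taken; on a 32-bit build with large-file support, apr_off_t is 64-bit while
apr_size_t is 32-bit, which forces the splitting loop and the blow-up above.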

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

---------------------------------------------------------------------
To unsubscribe, e-mail: bugs-unsubscribe@httpd.apache.org
For additional commands, e-mail: bugs-help@httpd.apache.org