Posted to modperl-cvs@perl.apache.org by au...@apache.org on 2006/07/05 20:15:21 UTC

svn commit: r419305 - /perl/Apache-SizeLimit/trunk/README

Author: autarch
Date: Wed Jul  5 11:15:21 2006
New Revision: 419305

URL: http://svn.apache.org/viewvc?rev=419305&view=rev
Log:
Update README with pod2text. I wish this were automated cause I'm sure
I'll forget on a semi-regular basis.

Modified:
    perl/Apache-SizeLimit/trunk/README

Modified: perl/Apache-SizeLimit/trunk/README
URL: http://svn.apache.org/viewvc/perl/Apache-SizeLimit/trunk/README?rev=419305&r1=419304&r2=419305&view=diff
==============================================================================
--- perl/Apache-SizeLimit/trunk/README (original)
+++ perl/Apache-SizeLimit/trunk/README Wed Jul  5 11:15:21 2006
@@ -1,293 +1,287 @@
 NAME
-
-Apache::SizeLimit - Because size does matter.
+    Apache::SizeLimit - Because size does matter.
 
 SYNOPSIS
+        <Perl>
+         Apache::SizeLimit->set_max_process_size(150_000);   # Max size in KB
+         Apache::SizeLimit->set_min_shared_size(10_000);     # Min share in KB
+         Apache::SizeLimit->set_max_unshared_size(120_000);  # Max unshared size in KB
+        </Perl>
 
-    <Perl>
-     $Apache::SizeLimit::MAX_UNSHARED_SIZE = 120000; # 120MB
-    </Perl>
-
-    PerlCleanupHandler Apache::SizeLimit
+        PerlCleanupHandler Apache::SizeLimit
 
 DESCRIPTION
-
-This module allows you to kill off Apache httpd processes if they grow
-too large. You can make the decision to kill a process based on its
-overall size, by setting a minimum limit on shared memory, or a
-maximum on unshared memory.
-
-You can set limits for each of these sizes, and if any limit is not
-met, the process will be killed.
-
-You can also limit the frequency that these sizes are checked so that
-this module only checks every N requests.
-
-This module is highly platform dependent, please read the CAVEATS
-section.
+    This module allows you to kill off Apache httpd processes if they grow
+    too large. You can make the decision to kill a process based on its
+    overall size, by setting a minimum limit on shared memory, or a maximum
+    on unshared memory.
+
+    You can set limits for each of these sizes, and if any limit is
+    exceeded, the process will be killed.
+
+    You can also limit the frequency that these sizes are checked so that
+    this module only checks every N requests.
+
+    This module is highly platform dependent; please read the "PER-PLATFORM
+    BEHAVIOR" section for details. It is possible that this module simply
+    does not support your platform.
 
 API
+    You can set the size limits from a Perl module or script loaded by
+    Apache by calling the appropriate class method on "Apache::SizeLimit":
 
-You can set set the size limits from a Perl module or script loaded by
-Apache:
-
-    use Apache::SizeLimit;
-
-    Apache::SizeLimit::setmax(150_000);           # Max size in KB
-    Apache::SizeLimit::setmin(10_000);            # Min share in KB
-    Apache::SizeLimit::setmax_unshared(120_000);  # Max unshared size in KB
-
-Then in your Apache configuration, make Apache::SizeLimit a
-C<PerlCleanupHandler>:
+    * Apache::SizeLimit->set_max_process_size($size)
+        This sets the maximum size of the process, including both shared and
+        unshared memory.
 
-    PerlCleanupHandler Apache::SizeLimit
+    * Apache::SizeLimit->set_max_unshared_size($size)
+        This sets the maximum amount of *unshared* memory the process can
+        use.
 
-If you want to use C<Apache::SizeLimit> from a registry script, you
-must call one of the above functions for every request:
+    * Apache::SizeLimit->set_min_shared_size($size)
+        This sets the minimum amount of shared memory the process must have.
 
-    use Apache::SizeLimit
+    The two methods related to shared memory size are effectively no-ops if
+    the module cannot determine the shared memory size for your platform.
+    See "PER-PLATFORM BEHAVIOR" for more details.
 
-    main();
+  Running the handler()
+    There are several ways to make this module actually run the code to kill
+    a process.
 
-    sub {
-        Apache::SizeLimit::setmax(150_000);
+    The simplest is to make "Apache::SizeLimit" a "PerlCleanupHandler" in
+    your Apache config:
 
-        # handle request
-    };
+        PerlCleanupHandler Apache::SizeLimit
 
-Calling any one of C<setmax()>, C<setmin()>, or C<setmax_unshared()>
-will install C<Apache::SizeLimit> as a cleanup handler, if it's not
-already installed.
+    This will ensure that "Apache::SizeLimit->handler()" is run for all
+    requests.
 
-If you want to combine this module with a cleanup handler of your own,
-make sure that C<Apache::SizeLimit> is the last handler run:
+    If you want to combine this module with a cleanup handler of your own,
+    make sure that "Apache::SizeLimit" is the last handler run:
 
-    PerlCleanupHandler  Apache::SizeLimit My::CleanupHandler
+        PerlCleanupHandler  Apache::SizeLimit My::CleanupHandler
 
-Remember, mod_perl will run stacked handlers from right to left, as
-they're defined in your configuration.
+    Remember, mod_perl will run stacked handlers from right to left, as
+    they're defined in your configuration.
 
-You can explicitly call the C<Apache::SizeLimit::handler()> function
-from your own handler:
+    You can also explicitly call "Apache::SizeLimit->handler()" from your
+    own cleanup handler:
 
-    package My::CleanupHandler
+        package My::CleanupHandler
 
-    sub handler {
-        my $r = shift;
+        sub handler {
+            my $r = shift;
 
-        # do my thing
+            # Causes File::Temp to remove any temp dirs created during the
+            # request
+            File::Temp::cleanup();
 
-        return Apache::SizeLimit::handler($r);
-    }
+            return Apache::SizeLimit->handler($r);
+        }
 
-Since checking the process size can take a few system calls on some
-platforms (e.g. linux), you may want to only check the process size
-every N times. To do so, simple set the
-C<$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS> global.
+    * Apache::SizeLimit->add_cleanup_handler($r)
+        You can call this method inside a request to run
+        "Apache::SizeLimit"'s "handler()" method for just that request. If
+        this method is called repeatedly, it ensures that it only ever adds
+        one cleanup handler.
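+
+        For example, from a registry script (a sketch; assumes the usual
+        mod_perl request object $r and limits like those in the SYNOPSIS):
+
+            use Apache::SizeLimit;
+
+            my $r = shift;
+
+            Apache::SizeLimit->set_max_unshared_size(120_000);  # KB
+            Apache::SizeLimit->add_cleanup_handler($r);
+
+            # ... handle the request ...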
 
-    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;
+  Checking Every N Requests
+    Since checking the process size can take a few system calls on some
+    platforms (e.g. linux), you may not want to check the process size for
+    every request.
 
-Now C<Apache::SizeLimit> will only check the process size on every
-other request.
+    * Apache::SizeLimit->set_check_interval($interval)
+        Calling this causes "Apache::SizeLimit" to only check the process
+        size every $interval requests. If you want this to affect all
+        processes, make sure to call this during server startup.
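+
+        For example, in a startup file loaded by Apache (a sketch; the
+        interval of 10 is just an illustration):
+
+            use Apache::SizeLimit;
+
+            # Only check the process size once every 10 requests
+            Apache::SizeLimit->set_check_interval(10);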
 
-Deprecated API
-
-Previous versions of this module documented three globals for defining
-memory size limits:
-
-  $Apache::SizeLimit::MAX_PROCESS_SIZE
-
-  $Apache::SizeLimit::MIN_SHARE_SIZE
-
-  $Apache::SizeLimit::MAX_UNSHARED_SIZE
-
-Direct use of these globals is deprecated, but will continue to work
-for the foreseeable future.
+SHARED MEMORY OPTIONS
+    In addition to simply checking the total size of a process, this module
+    can factor in how much of the memory used by the process is actually
+    being shared by copy-on-write. If you don't understand how memory is
+    shared in this way, take a look at the mod_perl docs at
+    http://perl.apache.org/docs/.
+
+    You can take advantage of the shared memory information by setting a
+    minimum shared size and/or a maximum unshared size. Experience on one
+    heavily trafficked mod_perl site showed that setting maximum unshared
+    size and leaving the others unset is the most effective policy. This is
+    because it only kills off processes that are truly using too much
+    physical RAM, allowing most processes to live longer and reducing the
+    process churn rate.
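+
+    For example, to implement that policy (a sketch reusing the limit from
+    the SYNOPSIS; tune the number for your own application):
+
+        <Perl>
+         use Apache::SizeLimit;
+
+         # Kill a child only when its *unshared* (real) RAM use is too high
+         Apache::SizeLimit->set_max_unshared_size(120_000);  # KB
+        </Perl>
+
+        PerlCleanupHandler Apache::SizeLimit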
+
+PER-PLATFORM BEHAVIOR
+    This module is highly platform dependent, since finding the size of a
+    process is different for each OS, and some platforms may not be
+    supported. In particular, the limits on minimum shared memory and
+    maximum unshared memory are currently only supported on Linux and BSD. If
+    you can contribute support for another OS, patches are very welcome.
+
+    Currently supported OSes:
+
+  linux
+    For linux we read the process size out of /proc/self/statm. If you are
+    worried about performance, you can consider using
+    "Apache::SizeLimit->set_check_interval()" to reduce how often this read
+    happens.
+
+    As of linux 2.6, /proc/self/statm does not report the amount of memory
+    shared by the copy-on-write mechanism as shared memory. This means that
+    decisions made based on shared memory as reported by that interface are
+    inherently wrong.
+
+    However, as of the 2.6.14 release of the kernel, there is a
+    /proc/self/smaps entry for each process. /proc/self/smaps reports
+    various sizes for each memory segment of a process and allows us to
+    count the amount of shared memory correctly.
+
+    If "Apache::SizeLimit" detects a kernel that supports /proc/self/smaps
+    and the "Linux::Smaps" module is installed, it will use that module
+    instead of /proc/self/statm.
+
+    Reading /proc/self/smaps is expensive compared to /proc/self/statm. It
+    must look at each page table entry of a process. Further, on
+    multiprocessor systems the access is synchronized with spinlocks. Again,
+    you might consider using "Apache::SizeLimit->set_check_interval()".
+
+   Copy-on-write and Shared Memory
+    The following example shows the effect of copy-on-write:
+
+      <Perl>
+        require Apache::SizeLimit;
+        package X;
+        use strict;
+        use Apache::Constants qw(OK);
+
+        my $x = "a" x (1024*1024);
+
+        sub handler {
+          my $r = shift;
+          my ($size, $shared) = Apache::SizeLimit->_check_size();
+          $x =~ tr/a/b/;
+          my ($size2, $shared2) = Apache::SizeLimit->_check_size();
+          $r->content_type('text/plain');
+          $r->print("1: size=$size shared=$shared\n");
+          $r->print("2: size=$size2 shared=$shared2\n");
+          return OK;
+        }
+      </Perl>
+
+      <Location /X>
+        SetHandler modperl
+        PerlResponseHandler X
+      </Location>
+
+    The parent Apache process allocates memory for the string in $x. The
+    "tr" operation then overwrites every "a" with "b" when the handler is
+    called. This write is done in place, so the process size doesn't
+    change; however, $x is no longer shared between the parent and the
+    child, because the write triggers copy-on-write.
+
+    If /proc/self/smaps is available, curl shows:
+
+      r2@s93:~/work/mp2> curl http://localhost:8181/X
+      1: size=13452 shared=7456
+      2: size=13452 shared=6432
+
+    Shared memory has lost 1024 kB. The process' overall size remains
+    unchanged.
+
+    Without /proc/self/smaps it says:
+
+      r2@s93:~/work/mp2> curl http://localhost:8181/X
+      1: size=13052 shared=3628
+      2: size=13052 shared=3636
+
+    One can see that the kernel lies about the shared memory: it simply
+    doesn't count copy-on-write pages as shared.
+
+  solaris 2.6 and above
+    For solaris we simply retrieve the size of /proc/self/as, which contains
+    the address-space image of the process, and convert to KB. Shared memory
+    calculations are not supported.
+
+    NOTE: This is only known to work for solaris 2.6 and above. Evidently
+    the /proc filesystem has changed between 2.5.1 and 2.6. Can anyone
+    confirm or deny?
+
+  BSD (and OSX)
+    Uses "BSD::Resource::getrusage()" to determine process size. This is
+    pretty efficient (a lot more efficient than reading it from the /proc fs
+    anyway).
+
+  AIX?
+    Uses "BSD::Resource::getrusage()" to determine process size. Not sure if
+    the shared memory calculations will work or not. AIX users?
+
+  Win32
+    Uses "Win32::API" to access process memory information. "Win32::API" can
+    be installed under ActiveState perl using the supplied ppm utility.
+
+  Everything Else
+    If your platform is not supported, then please send a patch to check the
+    process size. The more portable/efficient/correct the solution the
+    better, of course.
 
 ABOUT THIS MODULE
+    This module was written in response to questions on the mod_perl mailing
+    list on how to tell the httpd process to exit if it gets too big.
 
-This module was written in response to questions on the mod_perl
-mailing list on how to tell the httpd process to exit if it gets too
-big.
-
-Actually, there are two big reasons your httpd children will grow.
-First, your code could have a bug that causes the process to increase
-in size very quickly. Second, you could just be doing operations that
-require a lot of memory for each request. Since Perl does not give
-memory back to the system after using it, the process size can grow
-quite large.
-
-This module will not really help you with the first problem. For that
-you should probably look into C<Apache::Resource> or some other means
-of setting a limit on the data size of your program.  BSD-ish systems
-have C<setrlimit()>, which will kill your memory gobbling processes.
-However, it is a little violent, terminating your process in
-mid-request.
-
-This module attempts to solve the second situation, where your process
-slowly grows over time. It checks memory usage after every request,
-and if it exceeds a threshold, exits gracefully.
-
-By using this module, you should be able to discontinue using the
-Apache configuration directive B<MaxRequestsPerChild>, although for
-some folks, using both in combination does the job.
-
-SHARED MEMORY OPTIONS
+    Actually, there are two big reasons your httpd children will grow.
+    First, your code could have a bug that causes the process to increase in
+    size very quickly. Second, you could just be doing operations that
+    require a lot of memory for each request. Since Perl does not give
+    memory back to the system after using it, the process size can grow
+    quite large.
+
+    This module will not really help you with the first problem. For that
+    you should probably look into "Apache::Resource" or some other means of
+    setting a limit on the data size of your program. BSD-ish systems have
+    "setrlimit()", which will kill your memory gobbling processes. However,
+    it is a little violent, terminating your process in mid-request.
+
+    This module attempts to solve the second situation, where your process
+    slowly grows over time. It checks memory usage after every request, and
+    if it exceeds a threshold, exits gracefully.
+
+    By using this module, you should be able to discontinue using the Apache
+    configuration directive MaxRequestsPerChild, although for some folks,
+    using both in combination does the job.
+
+DEPRECATED APIS
+    Previous versions of this module documented three globals for defining
+    memory size limits:
+
+    * $Apache::SizeLimit::MAX_PROCESS_SIZE
+    * $Apache::SizeLimit::MIN_SHARE_SIZE
+    * $Apache::SizeLimit::MAX_UNSHARED_SIZE
+    * $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS
+    * $Apache::SizeLimit::USE_SMAPS
+
+    Direct use of these globals is deprecated, but will continue to work for
+    the foreseeable future.
+
+    It also documented three functions for use from registry scripts:
+
+    * Apache::SizeLimit::setmax()
+    * Apache::SizeLimit::setmin()
+    * Apache::SizeLimit::setmax_unshared()
 
-In addition to simply checking the total size of a process, this
-module can factor in how much of the memory used by the process is
-actually being shared by copy-on-write. If you don't understand how
-memory is shared in this way, take a look at the mod_perl Guide at
-http://perl.apache.org/guide/.
-
-You can take advantage of the shared memory information by setting a
-minimum shared size and/or a maximum unshared size. Experience on one
-heavily trafficked mod_perl site showed that setting maximum unshared
-size and leaving the others unset is the most effective policy. This
-is because it only kills off processes that are truly using too much
-physical RAM, allowing most processes to live longer and reducing the
-process churn rate.
-
-CAVEATS
-
-This module is highly platform dependent, since finding the size of a
-process is different for each OS, and some platforms may not be
-supported. In particular, the limits on minimum shared memory and
-maximum shared memory are currently only supported on Linux and BSD.
-If you can contribute support for another OS, patches are very
-welcome.
-
-Currently supported OSes:
-
-linux
-
-For linux we read the process size out of F</proc/self/statm>.  This
-is a little slow, but usually not too bad. If you are worried about
-performance, try only setting up the the exit handler inside CGIs
-(with the C<setmax()> function), and see if the CHECK_EVERY_N_REQUESTS
-option is of benefit.
-
-Since linux 2.6 F</proc/self/statm> does not report the amount of
-memory shared by the copy-on-write mechanism as shared memory. Hence
-decisions made on the basis of C<MAX_UNSHARED_SIZE> or
-C<MIN_SHARE_SIZE> are inherently wrong.
-
-To correct this situation, as of the 2.6.14 release of the kernel,
-there is F</proc/self/smaps> entry for each
-process. F</proc/self/smaps> reports various sizes for each memory
-segment of a process and allows us to count the amount of shared
-memory correctly.
-
-If C<Apache::SizeLimit> detects a kernel that supports
-F</proc/self/smaps> and if the C<Linux::Smaps> module is installed it
-will use them instead of F</proc/self/statm>. You can prevent
-C<Apache::SizeLimit> from using F</proc/self/smaps> and turn on the
-old behaviour by setting C<$Apache::SizeLimit::USE_SMAPS> to 0.
-
-C<Apache::SizeLimit> itself will C<$Apache::SizeLimit::USE_SMAPS> to 0
-if it cannot load C<Linux::Smaps> or if your kernel does not support
-F</proc/self/smaps>. Thus, you can check it to determine what is
-actually used.
-
-NOTE: Reading F</proc/self/smaps> is expensive compared to
-F</proc/self/statm>. It must look at each page table entry of a process.
-Further, on multiprocessor systems the access is synchronized with
-spinlocks. Hence, you are encouraged to set the C<CHECK_EVERY_N_REQUESTS>
-option.
-
-The following example shows the effect of copy-on-write:
-
-  <Perl>
-    require Apache::SizeLimit;
-    package X;
-    use strict;
-    use Apache::Constants qw(OK);
-
-    my $x= "a" x (1024*1024);
-
-    sub handler {
-      my $r = shift;
-      my ($size, $shared) = $Apache::SizeLimit::check_size();
-      $x =~ tr/a/b/;
-      my ($size2, $shared2) = $Apache::SizeLimit::check_size();
-      $r->content_type('text/plain');
-      $r->print("1: size=$size shared=$shared\n");
-      $r->print("2: size=$size2 shared=$shared2\n");
-      return OK;
-    }
-  </Perl>
-
-  <Location /X>
-    SetHandler modperl
-    PerlResponseHandler X
-  </Location>
-
-The parent apache allocates a megabyte for the string in C<$x>. The
-C<tr>-command then overwrites all "a" with "b" if the handler is
-called with an argument. This write is done in place, thus, the
-process size doesn't change. Only C<$x> is not shared anymore by
-means of copy-on-write between the parent and the child.
-
-If F</proc/self/smaps> is available curl shows:
-
-  r2@s93:~/work/mp2> curl http://localhost:8181/X
-  1: size=13452 shared=7456
-  2: size=13452 shared=6432
-
-Shared memory has lost 1024 kB. The process' overall size remains unchanged.
-
-Without F</proc/self/smaps> it says:
-
-  r2@s93:~/work/mp2> curl http://localhost:8181/X
-  1: size=13052 shared=3628
-  2: size=13052 shared=3636
-
-One can see the kernel lies about the shared memory. It simply doesn't
-count copy-on-write pages as shared.
-
-solaris 2.6 and above
-
-For solaris we simply retrieve the size of F</proc/self/as>, which
-contains the address-space image of the process, and convert to KB.
-Shared memory calculations are not supported.
-
-NOTE: This is only known to work for solaris 2.6 and above. Evidently
-the F</proc> filesystem has changed between 2.5.1 and 2.6. Can anyone
-confirm or deny?
-
-*bsd*
-
-Uses C<BSD::Resource::getrusage()> to determine process size.  This is
-pretty efficient (a lot more efficient than reading it from the
-F</proc> fs anyway).
-
-AIX?
-
-Uses C<BSD::Resource::getrusage()> to determine process size.  Not
-sure if the shared memory calculations will work or not.  AIX users?
-
-Win32
-
-Uses C<Win32::API> to access process memory information.
-C<Win32::API> can be installed under ActiveState perl using the
-supplied ppm utility.
-
-If your platform is not supported, then please send a patch to check
-the process size. The more portable/efficient/correct the solution the
-better, of course.
+    Besides setting the appropriate limit, these functions *also* add a
+    cleanup handler to the current request.
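+
+    For example, these two snippets are roughly equivalent (a sketch; $r
+    is the current request object):
+
+        # Deprecated API - sets the limit and adds a cleanup handler
+        Apache::SizeLimit::setmax(150_000);
+
+        # Current API
+        Apache::SizeLimit->set_max_process_size(150_000);
+        Apache::SizeLimit->add_cleanup_handler($r);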
 
 AUTHOR
+    Doug Bagley <do...@bagley.org>, channeling Procrustes.
 
-Doug Bagley <do...@bagley.org>, channeling Procrustes.
-
-Brian Moseley <ix...@maz.org>: Solaris 2.6 support
+    Brian Moseley <ix...@maz.org>: Solaris 2.6 support
 
-Doug Steinwand and Perrin Harkins <pe...@elem.com>: added support 
-    for shared memory and additional diagnostic info
+    Doug Steinwand and Perrin Harkins <pe...@elem.com>: added support for
+    shared memory and additional diagnostic info
 
-Matt Phillips <mp...@virage.com> and Mohamed Hendawi
-<mh...@virage.com>: Win32 support
+    Matt Phillips <mp...@virage.com> and Mohamed Hendawi
+    <mh...@virage.com>: Win32 support
 
-Dave Rolsky <au...@urth.org>, maintenance and fixes outside of
-mod_perl tree (0.06).
+    Dave Rolsky <au...@urth.org>, maintenance and fixes outside of
+    mod_perl tree (0.9+).