Posted to docs-cvs@perl.apache.org by st...@apache.org on 2001/12/27 12:15:04 UTC

cvs commit: modperl-docs/src/docs/2.0/devel/testing testing.pod

stas        01/12/27 03:15:04

  Added:       src/docs/2.0/devel/testing testing.pod
  Removed:     src/docs/2.0/devel/writing_tests writing_tests.pod
  Log:
  moving writing_tests to testing, since now the doc is covering more than
  just writing tests.
  
  Revision  Changes    Path
  1.1                  modperl-docs/src/docs/2.0/devel/testing/testing.pod
  
  Index: testing.pod
  ===================================================================
  =head1 NAME
  
  Running and Developing Tests with the C<Apache::Test> Framework
  
  =head1 Introduction
  
  This chapter talks about the C<Apache::Test> framework, and in
  particular explains how to:
  
  =over
  
  =item 1 run existing tests
  
  =item 2 setup a testing environment for a new project
  
  =item 3 develop new tests
  
  =back
  
  But first let's introduce the C<Apache::Test> framework.
  
  The C<Apache::Test> framework is designed to make it easy to write
  tests that have to run under the Apache webserver (not necessarily
  mod_perl). Originally designed for the mod_perl Apache module, it was
  extended to be usable with any Apache module.
  
  The tests themselves are written in Perl, and the framework provides
  extensive functionality which makes writing tests a simple and
  therefore enjoyable process.
  
  If you have ever written or looked at the tests most Perl modules
  come with, C<Apache::Test> uses the same concept. The script I<t/TEST>
  runs all the files ending with I<.t> that it finds in the I<t/>
  directory. When executed, a typical test prints the following:
  
    1..3     # going to run 3 tests
    ok 1     # the first  test has passed
    ok 2     # the second test has passed
    not ok 3 # the third  test has failed
  
  Every C<ok> or C<not ok> is followed by a number which tells which
  sub-test has succeeded or failed.
  
  I<t/TEST> uses the C<Test::Harness> module, which intercepts the
  C<STDOUT> stream, parses it and at the end of the run prints the
  results: how many tests and sub-tests were run, how many succeeded,
  were skipped or failed.
  
  Some tests may be skipped by printing:
  
    1..0 # all tests in this file are going to be skipped.
  
  Usually a test is skipped when some feature is optional and/or its
  prerequisites are not installed on the system, and this is not
  critical for the usefulness of the test suite. Once you detect that
  you cannot proceed with the test and it's not a must-pass test, you
  simply skip it.
  
  =head2 Verbose Testing
  
  By default print() statements in the test script are filtered out by
  C<Test::Harness>. If you want the test to print what it does (e.g.
  when you decide to debug some test), use the C<-verbose> option. So
  for example if your test does this:
  
    print "# testing : feature foo\n";
    print "# expected: $expected\n";
    print "# received: $received\n";
    ok $expected eq $received;
  
  in the normal mode, you won't see any of these prints. But if you run
  the test with I<t/TEST -verbose>, you will see something like this:
  
    # testing : feature foo
    # expected: 2
    # received: 2
    ok 2
  
  When you develop the test you should always add such debug
  statements, and once the test works for you, do not comment them out
  or delete them. If some user reports a failure in the test, you can
  ask them to run the failing test in the verbose mode and send you
  back the report. It'll be much easier to understand the problem if
  you get this debug output from the user.
  
  The section L<"Writing Tests"> discusses several helper functions
  which make writing tests easier.
  
  For more details about the C<Test::Harness> module please refer to its
  manpage. Also see the C<Test> manpage about Perl's test suite.
  
  =head1 Prerequisites
  
  In order to use C<Apache::Test> it has to be installed first.
  
  Install C<Apache::Test> using the familiar procedure:
  
    % cd Apache-Test
    % perl Makefile.PL
    % make && make test && make install
  
  If you install mod_perl 2.x, you get C<Apache::Test> installed as
  well.
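
  To check which version of C<Apache::Test> (if any) is already
  installed, you can ask Perl directly (a simple one-liner; the module
  name is the only assumption here):

    % perl -MApache::Test -le 'print Apache::Test->VERSION'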
  
  =head1 Running Tests
  
  It's much easier to imitate existing things than to create them from
  scratch. It's much easier to develop tests when you have some
  existing system that you can test, see how it works and build your
  own testing environment in a similar fashion. Therefore let's first
  look at how the existing test environments work.
  
  You can look at the modperl-2.0's or httpd-test's (I<perl-framework>)
  testing environments which both use C<Apache::Test> for their test
  suites.
  
  =head2 Testing Options
  
  Run:
  
    % t/TEST -help
  
  to get the list of options you can use during testing. They are
  covered in the rest of this document.
  
  =head2 Basic Testing
  
  Running tests is just like for any CPAN Perl module; first we generate
  the I<Makefile> file and build everything with I<make>:
  
    % perl Makefile.PL [options]
    % make
  
  Now we can do the testing. You can run the tests in two ways. The
  first one is the usual:
  
    % make test
  
  but it adds quite an overhead, since it has to check that everything
  is up to date (the usual C<make> source change control). Therefore
  you need to run it only once after C<make>; for re-running the tests
  it's faster to run them directly via:
  
    % t/TEST
  
  When C<make test> or C<t/TEST> are run, all tests found in the I<t>
  directory (files ending with I<.t> are recognized as tests) will be
  run.
  
  =head2 Individual Testing
  
  To run a single test, simply specify it on the command line. For
  example to run the test file I<t/protocol/echo.t>, execute:
  
    % t/TEST protocol/echo
  
  Notice that you don't have to add the I<t/> prefix and the I<.t>
  extension to the test filenames if you specify them explicitly, but
  you may include them as well. Therefore the following are all valid
  commands:
  
    % t/TEST   protocol/echo.t
    % t/TEST t/protocol/echo
    % t/TEST t/protocol/echo.t
  
  The server will be stopped if it was already running and a new one
  will be started before running the I<t/protocol/echo.t> test. At the
  end of the test the server will be shut down.
  
  When you run specific tests you may want to run them in the verbose
  mode, and depending on how the test was written, you may get more
  debug information under this mode. This mode is turned on with the
  I<-verbose> option:
  
    % t/TEST -verbose protocol/echo
  
  You can run groups of tests at once. This command:
  
    % ./t/TEST modules protocol/echo
  
  will run all the tests in the I<t/modules/> directory, followed by
  the I<t/protocol/echo.t> test.
  
  
  =head2 Repetitive Testing
  
  By default when you run tests without the I<-run-tests> option, the
  server will be started before the testing and stopped at the end. If
  during a debugging process you need to re-run tests without
  restarting the server, you can start the server once:
  
    % t/TEST -start-httpd
  
  and then run the test(s) with I<-run-tests> option many times:
  
    % t/TEST -run-tests
  
  without waiting for the server to restart.
  
  When you are done with tests, stop the server with:
  
    % t/TEST -stop-httpd
  
  When the server is started you can modify I<.t> files and rerun the
  tests without restarting it. However if you modify response handlers,
  you must restart the server for the changes to take effect. If
  C<Apache::Reload> is used and configured to automatically reload the
  handlers when they change, you don't have to restart the server. For
  example to automatically reload all C<TestDirective::*> modules when
  they change on disk, add to I<t/conf/extra.conf.in>:
  
    PerlModule Apache::Reload
    PerlInitHandler Apache::Reload
    PerlSetVar ReloadAll Off
    PerlSetVar ReloadModules "TestDirective::*"
  
  and restart the server.
  
  The I<-start-httpd> option always stops the server first if any is
  running. In case you have a server running on the same port (for
  example if you develop a few tests at the same time in different
  trees), you should run the server on a different port.
  C<Apache::Test> will try to automatically pick a free port, but you
  can explicitly tell it which port to use, with the I<-port>
  configuration option:
  
  META: -port=select is not yet committed!
  
    % t/TEST -start-httpd -port 8799
  
  or by setting the environment variable C<APACHE_PORT> to the desired
  value before starting the server.
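
  For example, under a Bourne-compatible shell (assuming this variable
  is honored by your version of C<Apache::Test>):

    % APACHE_PORT=8799 t/TEST -start-httpd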
  
  Normally when I<t/TEST> is run without specifying which tests to run,
  the tests are sorted alphabetically. If tests are explicitly passed
  as arguments to I<t/TEST>, they will be run in the specified order.
  
  =head2 Verbose Testing
  
  In case something goes wrong you should run the tests in the verbose
  mode:
  
    % t/TEST -verbose
  
  In this case the test may print useful information, like what values
  it expects and what values it receives, provided that the test is
  written to report these. In the silent mode (without C<-verbose>)
  these printouts are filtered out by C<Test::Harness>. When running in
  the I<verbose> mode it's usually a good idea to run only the
  problematic tests, to minimize the size of the generated output.
  
  When debugging problems it helps to keep the I<error_log> file open
  in another console, and to watch the debug output in real time via
  tail(1):
  
    % tail -f t/logs/error_log
  
  Of course this file gets created only when the server starts, so you
  cannot run tail(1) on it before then. Every time C<t/TEST -clean> is
  run, I<t/logs/error_log> gets deleted, therefore you have to run the
  tail(1) command again once the server has started.
  
  =head2 Stress Testing
  
  =head3 The Problem
  
  When we test a stateless machine (i.e. all tests are independent),
  running all tests once ensures that all the tested features work
  properly. However when a state machine is tested (i.e. where a run of
  one test may influence another test), it's not enough to run all the
  tests once to know that the tested features actually work. It's quite
  possible that if the same tests are run in a different order and/or
  repeated a few times, some tests may fail. This usually happens when
  some tests don't restore the system under test to its pristine state
  at the end of the run, which may influence other tests that rely on
  starting from a pristine state, when in fact that's no longer
  true. It's even possible that a single test may fail when run twice
  or three times in a sequence.
  
  =head3 The Solution
  
  To reduce the possibility of such dependency errors, it's important
  to run the tests in random order, repeated many times with many
  different pseudo-random engine initialization seeds. Of course if no
  failures get spotted that doesn't mean that there are no test
  inter-dependencies, unless all possible combinations were run (the
  exhaustive approach). Therefore it's possible that some problems may
  still be seen in production, but this kind of testing greatly
  minimizes that possibility.
  
  The C<Apache::Test> framework provides a few options useful for stress
  testing.
  
  =over
  
  =item -times
  
  You can run the tests N times by using the I<-times> option. For
  example to run all the tests 3 times specify:
  
    % t/TEST -times=3
  
  =item -order
  
  It's possible that certain tests don't clean up after themselves and
  modify the state of the server, which may influence other tests. But
  since normally all the tests are run in the same order, the potential
  problem may not be discovered until the code is used in production,
  where real world usage hits the problem. Therefore in order to detect
  as many problems as possible during the testing process, it may be
  useful to run the tests in different orders.
  
  This is of course mostly useful in conjunction with the I<-times=N>
  option.
  
  Assuming that we have tests a, b and c:
  
  =over
  
  =item * -order=rotate
  
  rotate the tests: a, b, c, a, b, c
  
  =item * -order=repeat
  
  repeat the tests: a, a, b, b, c, c
  
  =item * -order=random
  
  run the tests in random order, e.g.: a, c, c, b, a, b
  
  In this mode the seed picked by srand() is printed to C<STDOUT>, so
  it can then be used to rerun the tests in exactly the same order
  (remember to log the output).
  
  =item * -order=SEED
  
  used to initialize the pseudo-random algorithm, which allows you to
  reproduce the same sequence of tests. For example if we run:
  
    % t/TEST -order=random -times=5
  
  and the seed 234559 is used, we can repeat the same order of tests, by
  running:
  
    % t/TEST -order=234559 -times=5
  
  Alternatively, the environment variable C<APACHE_TEST_SEED> can be set
  to the value of a seed when I<-order=random> is used. e.g. under
  bash(1):
  
    % APACHE_TEST_SEED=234559 t/TEST -order=random -times=5
  
  or with any shell program if you have the C<env(1)> utility:
  
    $ env APACHE_TEST_SEED=234559 t/TEST -order=random -times=5
  
  =back
  
  =back
  
  =head3 Resolving Sequence Problems
  
  When this kind of testing is used and a failure is detected there are
  two problems:
  
  =over
  
  =item 1
  
  The first is to be able to reproduce the problem, so that if we think
  we've fixed it, we can verify the fix. This one is easy: just
  remember the sequence of tests run up to and including the failing
  test, and rerun the same sequence once again after the problem has
  been fixed.
  
  =item 2
  
  The second is to be able to understand the cause of the problem. If
  during the random run the failure happened after running 400 tests,
  how can we possibly know which of the previously run tests caused the
  failure of test 401? Chances are that most of the tests were clean
  and don't have inter-dependency problems. Therefore it'd be very
  helpful if we could reduce the long sequence to a minimum, preferably
  1 or 2 tests. Then we can try to understand the cause of the detected
  problem.
  
  =back
  
  =head3 Apache::TestSmoke Solution
  
  C<Apache::TestSmoke> attempts to solve both problems. When it's run,
  at the end of each iteration it reports the minimal sequence of tests
  causing a failure. This doesn't always succeed, but works in many
  cases.
  
  You should create a small script to drive C<Apache::TestSmoke>,
  usually I<t/SMOKE.PL>. If you don't have it already, create it:
  
    file:t/SMOKE.PL
    ---------------
    #!perl
    
    use strict;
    use warnings FATAL => 'all';
    
    use FindBin;
    use lib "$FindBin::Bin/../Apache-Test/lib";
    use lib "$FindBin::Bin/../lib";
    
    use Apache::TestSmoke ();
    
    Apache::TestSmoke->new(@ARGV)->run;
  
  Usually I<Makefile.PL> converts it into I<t/SMOKE> while adjusting
  the perl path, but you can create I<t/SMOKE> in the first place as
  well.
  
  I<t/SMOKE> performs the following operations:
  
  =over
  
  =item 1
  
  Runs the tests randomly until the first failure is detected, or
  non-randomly if the I<-order> option is set to I<repeat> or I<rotate>.
  
  =item 2
  
  Then it tries to reduce that sequence of tests to a minimal sequence
  that still causes the same failure.
  
  =item 3
  
  It reports all the successful reductions as it goes to STDOUT and to
  a report file of the format: smoke-report-<date>.txt.

  In addition the system's build parameters are logged into the report
  file, so the detected problems can be reproduced.
  
  =item 4
  
  Goes back to step 1 and runs again using a new random seed, which may
  detect different failures.
  
  =back
  
  Currently for each reduction path, the following reduction algorithms
  are applied:
  
  =over
  
  =item 1
  
  Binary search: first try the upper half, then the lower (see the
  sketch following this list).
  
  =item 2
  
  Random window: randomize the left item, then the right item and return
  the items between these two points.
  
  =back
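
  To illustrate the binary-search idea, here is a simplified,
  hypothetical sketch (not the actual C<Apache::TestSmoke> code): given
  an ordered list of tests whose last element fails, it tries to
  reproduce the failure with only the upper or only the lower half of
  the preceding tests, and keeps halving as long as the failure
  persists:

    sub reduce_by_halves {
        # $still_fails->(\@subset) re-runs a candidate sequence and
        # returns true if the failure is still reproduced
        my($tests, $still_fails) = @_;
        my @seq    = @$tests;
        my $failed = pop @seq;      # the test that actually failed

        while (@seq > 1) {
            my $mid   = int(@seq / 2);
            my @upper = @seq[$mid .. $#seq];
            my @lower = @seq[0 .. $mid - 1];

            if    ($still_fails->([@upper, $failed])) { @seq = @upper }
            elsif ($still_fails->([@lower, $failed])) { @seq = @lower }
            else  { last } # neither half alone reproduces the failure
        }
        return [@seq, $failed];
    }

  The callback could, for instance, shell out to I<t/TEST> with the
  candidate sequence and check whether the same test still fails.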
  
  You can get the usage information by executing:
  
    % t/SMOKE -help
  
  By default you don't need to supply any arguments to run it, simply
  execute:
  
    % t/SMOKE
  
  If you want to work on certain tests you can specify them in the same
  way you do with I<t/TEST>:
  
    % t/SMOKE foo/bar foo/tar
  
  If you already have a sequence of tests that you want to reduce
  (perhaps because a previous run of the smoke testing didn't reduce the
  sequence enough to be able to diagnose the problem), you can request
  to do just that:
  
    % t/SMOKE -order=rotate -times=1 foo/bar foo/tar
  
  I<-order=rotate> is used just to override the default
  I<-order=random>, since in this case we want to preserve the order. We
  also specify I<-times=1> for the same reason (override the default
  which is 50).
  
  You can override the number of srand() iterations to perform (read:
  how many times to randomize the sequence), the number of times to
  repeat the tests (the default is 10) and the path to the file to use
  for reports:
  
    % t/SMOKE -times=5 -iterations=20 -report=../myreport.txt
  
  Finally, any other options passed will be forwarded to C<t/TEST> as
  is.
  
  =head2 RunTime Configuration Overriding
  
  After the server is configured during C<make test> or with C<t/TEST
  -config>, it's possible to explicitly override certain configuration
  parameters. The override-able parameters are listed when executing:
  
    % t/TEST -help
  
  Probably the most useful parameters are:
  
  =over
  
  =item * -preamble
  
  configuration directives to add at the beginning of I<httpd.conf>.
  For example to turn the tracing on:
  
    % t/TEST -preamble "PerlTrace all"
  
  =item * -postamble
  
  configuration directives to add at the end of I<httpd.conf>. For
  example to load a certain Perl module:
  
    % t/TEST -postamble "PerlModule MyDebugMode"
  
  =item * -user
  
  run as user I<nobody>:
  
    % t/TEST -user nobody
  
  =item * -port
  
  run on a different port:
  
    % t/TEST -port 8799
  
  =item * -servername
  
  run on a different server:
  
    % t/TEST -servername test.example.com
  
  =item * -httpd
  
  configure an httpd other than the default (that apxs figures out):
  
    % t/TEST -httpd ~/httpd-2.0/httpd
  
  =item * -apxs
  
  switch to another apxs:
  
    % t/TEST -apxs ~/httpd-2.0-prefork/bin/apxs
  
  =back
  
  For a complete list of override-able configuration parameters see the
  output of C<t/TEST -help>.
  
  =head2 Request Generation and Response Options
  
  We have mentioned already the most useful run-time options. Here are
  some other options that you may find useful during testing.
  
  =over
  
  =item * -ping
  
  Ping the server to see whether it is running:
  
    % t/TEST -ping
  
  Ping the server and wait until it starts, reporting the waiting time:
  
    % t/TEST -ping=block
  
  This can be useful in conjunction with I<-run-tests> option during debugging:
  
    % t/TEST -ping=block -run-tests
  
  Normally, I<-run-tests> will quit immediately if it detects that the
  server is not running, but with I<-ping=block> in effect, it'll wait
  indefinitely for the server to start up.
  
  =item * -head
  
  Issue a C<HEAD> request. For example to request I</server-info>:
  
    % t/TEST -head /server-info
  
  =item * -get
  
  Request the body of a certain URL via C<GET>.
  
    % t/TEST -get /server-info
  
  If no URL is specified C</> is used.
  
  Also you can issue a C<GET> request but receive only the headers in
  the response (e.g. useful to just check C<Content-length>):
  
    % t/TEST -head -get /server-info
  
  C<GET> URL with authentication credentials:
  
    % t/TEST -get /server-info -username dougm -password domination
  
  (please keep the password secret!)
  
  =item * -post
  
  Generate a C<POST> request.
  
  Read content to C<POST> from string:
  
    % t/TEST -post /TestApache::post -content 'name=dougm&company=covalent'
  
  Read content to C<POST> from C<STDIN>:
  
    % t/TEST -post /TestApache::post -content - < foo.txt
  
  Generate a content body of 1024 bytes in length:
  
    % t/TEST -post /TestApache::post -content x1024
  
  The same but print only the response headers, e.g. useful to just
  check C<Content-length>:
  
    % t/TEST -post -head /TestApache::post -content x1024
  
  =item * -header
  
  Add headers to (-get|-post|-head) request:
  
    % t/TEST -get -header X-Test=10 -header X-Host=example.com /server-info
  
  =item * -ssl
  
  Run all tests through mod_ssl:
  
    % t/TEST -ssl
  
  =item * -http11
  
  Run all tests with HTTP/1.1 (C<KeepAlive>) requests:
  
    % t/TEST -http11
  
  =item * -proxy
  
  Run all tests through mod_proxy:
  
    % t/TEST -proxy
  
  
  =back
  
  The debugging options I<-debug> and I<-breakpoint> are covered in the
  L<Debugging> section.
  
  For a complete list of available switches see the output of C<t/TEST
  -help>.
  
  =head2 Batch Mode
  
  When running in batch mode and redirecting C<STDOUT>, this state is
  automagically detected and the I<no color> mode is turned on, under
  which the program generates minimal output to keep the log files
  useful. If this doesn't work and you still get all the noisy output
  of an interactive run, set the C<APACHE_TEST_NO_COLOR=1> environment
  variable.
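
  For example, to capture a complete run into a log file while forcing
  the no-color mode (a hypothetical invocation):

    % APACHE_TEST_NO_COLOR=1 t/TEST > test_run.log 2>&1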
  
  =head1 Setting Up Testing Environment
  
  We will assume that you set up your testing environment even before
  you start coding the project, which is a very smart thing to do. Of
  course it'll take you more time upfront, but it will save you a lot
  of time during the project's development and debugging stages. The
  L<extreme programming
  methodology|/item_extreme_programming_methodology> says that tests
  should be written before starting the code development.
  
  =head2 Basic Testing Environment
  
  So the first thing is to create a package and all the helper files, so
  later on we can distribute it on CPAN. We are going to develop an
  C<Apache::Amazing> module as an example.
  
    % h2xs -AXn Apache::Amazing
    Writing Apache/Amazing/Amazing.pm
    Writing Apache/Amazing/Makefile.PL
    Writing Apache/Amazing/README
    Writing Apache/Amazing/test.pl
    Writing Apache/Amazing/Changes
    Writing Apache/Amazing/MANIFEST
  
  C<h2xs> is a nifty utility that gets installed together with Perl and
  helps us to create some of the files we will need later.
  
  However we are going to use a slightly different file layout,
  therefore we are going to move things around a bit.
  
  We want our module to live in the I<Apache-Amazing> directory, so we
  do:
  
    % mv Apache/Amazing Apache-Amazing
    % rmdir Apache
  
  From now on the I<Apache-Amazing> directory is our working directory.
  
    % cd Apache-Amazing
  
  We don't need I<test.pl>, as we are going to create a whole testing
  environment:
  
    % rm test.pl
  
  We want our package to reside under the I<lib> directory:
  
    % mkdir lib
    % mkdir lib/Apache
    % mv Amazing.pm lib/Apache
  
  Now we adjust the I<lib/Apache/Amazing.pm> to look like this:
  
    file:lib/Apache/Amazing.pm
    --------------------------
    package Apache::Amazing;
    
    use strict;
    use warnings;
    
    use Apache::RequestRec ();
    use Apache::RequestIO ();
    
    $Apache::Amazing::VERSION = '0.01';
    
    use Apache::Const -compile => 'OK';
    
    sub handler {
        my $r = shift;
        $r->content_type('text/plain');
        $r->print("Amazing!");
        return Apache::OK;
    }
    1;
    __END__
    ... pod documentation goes here...
  
  The only thing it does is set the content type to I<text/plain> and
  respond with I<"Amazing!">.
  
  Next adjust or create the I<Makefile.PL> file:
  
    file:Makefile.PL
    ----------------
    require 5.6.1;
    
    use ExtUtils::MakeMaker;
    
    use lib qw(../blib/lib lib );
    
    use Apache::TestMM qw(test clean); #enable 'make test'
    
    # prerequisites
    my %require =
      (
       "Apache::Test" => "", # any version will do
      );
  
    my @scripts = qw(t/TEST);

    # accept the configs from the command line
    Apache::TestMM::filter_args();
    Apache::TestMM::generate_script($_) for @scripts;
  
    WriteMakefile(
        NAME         => 'Apache::Amazing',
        VERSION_FROM => 'lib/Apache/Amazing.pm',
        PREREQ_PM    => \%require,
        clean        => {
                         FILES => "@{ clean_files() }",
                        },
        ($] >= 5.005 ?
            (ABSTRACT_FROM => 'lib/Apache/Amazing.pm',
             AUTHOR        => 'Stas Bekman <stas (at) stason.org>',
            ) : ()
        ),
    );
    
    sub clean_files {
        return [@scripts];
    }
  
  C<Apache::TestMM> will do a lot of things for us, such as building a
  complete Makefile with proper I<'test'> and I<'clean'> targets,
  automatically converting I<.PL> and I<conf/*.in> files and more.
  
  As you can see, we specify a prerequisites hash with I<Apache::Test>
  in it, so if the package gets distributed on CPAN, the C<CPAN.pm>
  shell will know to fetch and install this required package.
  
  Next we create the test suite, which will reside in the I<t>
  directory:
  
    % mkdir t
  
  First we create I<t/TEST.PL> which will be automatically converted
  into I<t/TEST> during I<perl Makefile.PL> stage:
  
    file:t/TEST.PL
    --------------
    #!perl
    
    use strict;
    use warnings FATAL => 'all';
    
    use lib qw(lib);
    
    use Apache::TestRunPerl ();
    
    Apache::TestRunPerl->new->run(@ARGV);
  
  This assumes that C<Apache::Test> is already installed on your system
  and Perl can find it. If not, you should tell Perl where to find it.
  For example you could add:
  
    use lib qw(../Apache-Test/lib);
  
  to I<t/TEST.PL>, if C<Apache::Test> is located in a parallel
  directory.
  
  As you can see we didn't write the real path to the Perl executable,
  but C<#!perl>. When I<t/TEST> is created the correct path will be
  placed there automatically.
  
  Next we need to prepare extra Apache configuration bits, which will
  reside in I<t/conf>:
  
    % mkdir t/conf
  
  We create the I<t/conf/extra.conf.in> file, which will be
  automatically converted into I<t/conf/extra.conf> before the server
  starts. If the file has any placeholders like C<@documentroot@>,
  these will be replaced with the real values specific to the server
  being used. In our case we put the following configuration bits into
  this file:
  
    file:t/conf/extra.conf.in
    -------------------------
    # this file will be Include-d by @ServerRoot@/httpd.conf
    
    # where Apache::Amazing can be found
    PerlSwitches -Mlib=@ServerRoot@/../lib
    # preload the module
    PerlModule Apache::Amazing
    <Location /test/amazing>
        SetHandler modperl
        PerlResponseHandler Apache::Amazing
    </Location>
  
  As you can see, we just add a simple E<lt>LocationE<gt> container and
  tell Apache that the namespace I</test/amazing> should be handled by
  the C<Apache::Amazing> module running as a mod_perl handler.
  
  As mentioned before, you can use C<Apache::Reload> to automatically
  reload the modules under development when they change. The setup for
  this goes into I<t/conf/extra.conf.in> as well:
  
    file:t/conf/extra.conf.in
    -------------------------
    PerlModule Apache::Reload
    PerlPostReadRequestHandler Apache::Reload
    PerlSetVar ReloadAll Off
    PerlSetVar ReloadModules "Apache::Amazing"
  
  For more information about C<Apache::Reload> refer to its manpage.
  
  Now we can create a simple test:
  
    file:t/basic.t
    --------------
    use strict;
    use warnings FATAL => 'all';
    
    use Apache::Amazing;
    use Apache::Test;
    use Apache::TestUtil;
    
    plan tests => 2;
    
    ok 1; # simple load test
    
    my $config = Apache::Test::config();
    my $url = '/test/amazing';
    my $data = $config->http_raw_get($url);
    
    ok t_cmp(
             "Amazing!",
             $data,
             "basic test",
            );
  
  Now create the I<README> file.
  
    % touch README
  
  Don't forget to put in the relevant information about your module, or
  arrange for C<ExtUtils::MakeMaker::WriteMakefile()> to do this for you
  with:
  
    file:Makefile.PL
    ----------------
    WriteMakefile(
                 ...
        dist  => {
                  PREOP => 'pod2text lib/Apache/Amazing.pm > $(DISTVNAME)/README',
                 },
                 ...
                 );
  
  In this case I<README> will be created from the documentation POD
  sections in I<lib/Apache/Amazing.pm>, but the file has to exist for
  I<make dist> to succeed.
  
  Finally, we adjust or create the I<MANIFEST> file, so we can prepare
  a complete distribution. We list all the files that should enter the
  distribution, including the I<MANIFEST> file itself:
  
    file:MANIFEST
    -------------
    lib/Apache/Amazing.pm
    t/TEST.PL
    t/basic.t
    t/conf/extra.conf.in
    Makefile.PL
    Changes
    README
    MANIFEST
  
  That's it. Now we can build the package. But first we need to know
  where the C<apxs> utility of the Apache installation on our system is
  located. We pass its path as an option:
  
    % perl Makefile.PL -apxs ~/httpd/prefork/bin/apxs
    % make
    % make test
  
    basic...........ok
    All tests successful.
    Files=1, Tests=2,  1 wallclock secs ( 0.52 cusr +  0.02 csys =  0.54 CPU)
  
  To install the package run:
  
    % make install
  
  Now we are ready to distribute the package on CPAN:
  
    % make dist
  
  will create the package which can be immediately uploaded to CPAN. In
  this example the generated source package with all the required files
  will be called: I<Apache-Amazing-0.01.tar.gz>.
  
  The only thing that we haven't done, and hope that you will do, is to
  write the POD sections for the C<Apache::Amazing> module, explaining
  how amazingly it works and how amazingly it can be deployed by other
  users.
  
  
  =head2 Extending Configuration Setup
  
  Sometimes you need to add extra I<httpd.conf> configuration and Perl
  startup code specific to your project that uses C<Apache::Test>. This
  can be accomplished by creating the desired files with an I<.in>
  extension in the I<t/conf/> directory and running:
  
    % t/TEST -config
  
  which for each file with the I<.in> extension will create a new file
  without this extension, convert any template placeholders into real
  values and link it from the main I<httpd.conf>. The latter happens
  only if the file has one of the following extensions:
  
  =over
  
  =item * .conf.in
  
  will add to I<t/conf/httpd.conf>:
  
    Include foo.conf
  
  =item * .pl.in
  
  will add to I<t/conf/httpd.conf>:
  
    PerlRequire foo.pl
  
  =item * other
  
  other files with I<.in> extension will be processed as well, but not
  linked from I<httpd.conf>.
  
  =back
  
  As mentioned before, when the converted files are created, any
  special tokens in them are replaced with the appropriate values. For
  example the token C<@ServerRoot@> will be replaced with the value
  defined by the C<ServerRoot> directive, so you can write a file that
  does the following:
  
    file:my-extra.conf.in
    ---------------------
    PerlSwitches -Mlib=@ServerRoot@/../lib
  
  and assuming that the I<ServerRoot> is I<~/modperl-2.0/t/>, when
  I<my-extra.conf> is created, it'll look like this:
  
    file:my-extra.conf
    ------------------
    PerlSwitches -Mlib=~/modperl-2.0/t/../lib
  
  The valid tokens are defined in C<%Apache::TestConfig::Usage> and can
  also be seen in the I<configuration options> section of the C<t/TEST
  -help> output. The tokens are case insensitive.
  
  =head2 Special Configuration Files
  
  Some of the files in the I<t/conf> directory have a special meaning,
  since the C<Apache::Test> framework uses them for the minimal
  configuration setup. But they can be overridden:
  
  =over
  
  =item *
  
  if the file I<t/conf/httpd.conf.in> exists, it will be used instead of
  the default template (in I<Apache/TestConfig.pm>).
  
  =item *
  
  if the file I<t/conf/extra.conf.in> exists, it will be used to
  generate I<t/conf/extra.conf> with C<@variable@> substitutions.
  
  =item *
  
  if the file I<t/conf/extra.conf> exists, it will be included by
  I<httpd.conf>.
  
  =item *
  
  if the file I<t/conf/modperl_extra.pl> exists, it will be included by
  I<httpd.conf> as a mod_perl startup file (via C<PerlRequire>); see
  the example after this list.
  
  =back
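
  For example, a minimal (hypothetical) I<t/conf/modperl_extra.pl>
  could preload modules shared by several tests; since it's loaded via
  C<PerlRequire>, it must end with a true value:

    file:t/conf/modperl_extra.pl
    ----------------------------
    use strict;
    use warnings;

    # code here runs once at server startup,
    # e.g. preloading commonly used modules:
    use POSIX ();

    1;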
  
  
  
  =head1 Apache::Test Framework's Architecture
  
  In the previous section we have written a basic test, which doesn't do
  much. In the following sections we will explain how to write more
  elaborate tests.
  
  When you write a test for Apache, unless you want to test some static
  resource, like fetching a file, usually you have to write a response
  handler and a corresponding test that will generate a request to
  exercise this response handler and verify that the response is as
  expected. From now on we may refer to these two parts as the client
  and server parts of the test, or the request and response parts of
  the test.
  
  In some cases the response part of the test runs the test inside
  itself, so all it requires from the request part is to generate the
  request and print out the complete response without doing anything
  else. In such cases C<Apache::Test> can auto-generate the client part
  of the test for you.
  
  =head2 Developing Response-only Part of a Test
  
  If you write only the response part of the test, C<Apache::Test> will
  automatically generate the corresponding request part that will
  generate the request. In this case your response part should print
  I<'ok 1'>, I<'not ok 2'> results just as usual tests do. The
  autogenerated request part will receive the response and print it
  out, automatically fulfilling the C<Test::Harness> expectations.
  
  The corresponding request part of the test is named just like the
  response part, using the following translation:
  
    $response_test =~ s|t/[^/]+/Test([^/]+)/(.*).pm$|t/\L$1\E/$2.t|;
  
  so for example I<t/response/TestApache/write.pm> becomes:
  I<t/apache/write.t>.
  
  If we look at the autogenerated test I<t/apache/write.t>, we can see
  that it starts with a warning that it has been autogenerated, so you
  won't attempt to change it, followed by a trace of the calls that
  generated this test, in case you want to find out who generated it,
  and finally it loads the C<Apache::TestConfig> module and prints a
  raw response from the response part:
  
    use Apache::TestConfig ();
    print Apache::TestConfig->thaw->http_raw_get("/TestApache::write");
  
  As you can see the request URI is autogenerated from the response test
  name:
  
    $response_test =~ s|.*/([^/]+)/(.*).pm$|/$1::$2|;
  
  So I<t/response/TestApache/write.pm> becomes: I</TestApache::write>.
  
  Now a simple response test may look like this:
  
    package TestApache::write;
    
    use strict;
    use warnings FATAL => 'all';
    
    use constant BUFSIZ => 512; #small for testing
    use Apache::Const -compile => 'OK';
    
    sub handler {
        my $r = shift;
        $r->content_type('text/plain');
    
        $r->write("1..2\n");
        $r->write("ok 1\n");
        $r->write("not ok 2\n");
    
        Apache::OK;
    }
    1;
  
  [F] C<Apache::Const> is mod_perl 2.x's package; if you test under
  1.x, use the C<Apache::Constants> module instead [/F].
  
  The configuration part for this test will be autogenerated by the
  C<Apache::Test> framework and added to the autogenerated file
  I<t/conf/httpd.conf>. In our case the following configuration section
  will be added.
  
    <Location /TestApache::write>
       SetHandler modperl
       PerlResponseHandler TestApache::write
    </Location>
  
  You should remember to run:
  
    % t/TEST -clean
  
  so that when you run your new tests the new configuration will be
  added.
  
  =head2 Developing Response and Request Parts of a Test
  
  But in most cases you want to write a two-part test where the client
  (request) part generates various requests and tests the responses.
  
  It's possible that the client part tests a static file or some other
  feature that doesn't require a dynamic response. In this case, only
  the request part of the test should be written.
  
  If you need to write a complete test, with both parts, you proceed
  just like in the previous section, but now you write the client part
  of the test yourself. It's quite easy; all you have to do is to
  generate requests and check the responses. So a typical test will
  look like this:
  
    file:t/apache/cool.t
    --------------------
    use strict;
    use warnings FATAL => 'all';
  
    use Apache::Test;
    use Apache::TestUtil;
    use Apache::TestRequest;
  
    plan tests => 1; # plan one test.
  
    Apache::TestRequest::module('default');
  
    my $config   = Apache::Test::config();
    my $hostport = Apache::TestRequest::hostport($config) || '';
    t_debug("connecting to $hostport");
  
    my $received = $config->http_raw_get("/TestApache::cool", undef);
    my $expected = "COOL";
  
    ok t_cmp(
             $expected,
             $received,
             "testing TestApache::cool",
              );
  
  See the L<Apache::TestUtil> manpage for more info on the t_cmp()
  function (e.g. it works with regular expressions as well).
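
  For instance (a hypothetical variation of the check above), a
  pre-compiled regular expression can be passed as the expected value:

    ok t_cmp(qr/^COOL$/, $received, "testing TestApache::cool");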
  
  And the corresponding response part:
  
    file:t/response/TestApache/cool.pm
    ----------------------------------
    package TestApache::cool;
    
    use strict;
    use warnings FATAL => 'all';
    
    use Apache::Const -compile => 'OK';
    
    sub handler {
        my $r = shift;
        $r->content_type('text/plain');
    
        $r->write("COOL");
    
        Apache::OK;
    }
    1;
  
  Again, remember to run I<t/TEST -clean> before running the new test so
  the configuration will be created for it.
  
  As you can see the test generates a request to I</TestApache::cool>,
  and expects it to return I<"COOL">. If we run the test:
  
    % ./t/TEST t/apache/cool
  
  We see:
  
    apache/cool....ok
    All tests successful.
    Files=1, Tests=1,  1 wallclock secs ( 0.52 cusr +  0.02 csys =  0.54 CPU)
  
  But if we run it in the debug (verbose) mode, we can actually see what
  we are testing, what was expected and what was received:
  
    apache/cool....1..1
    # connecting to localhost:8529
    # testing : testing TestApache::cool
    # expected: COOL
    # received: COOL
    ok 1
    ok
    All tests successful.
    Files=1, Tests=1,  1 wallclock secs ( 0.49 cusr +  0.03 csys =  0.52 CPU)
  
  So if in our simple test we had received something different from
  I<COOL>, or nothing at all, we could immediately see what the problem
  is.
  
  The name of the request part of the test is very important. If
  C<Apache::Test> cannot find the corresponding test for the response
  part, it'll automatically generate one, and in this case it's
  probably not what you want. Therefore when you choose the filename
  for the test, make sure to pick the same one C<Apache::Test> will
  pick. So if the response part is named I<t/response/TestApache/cool.pm>,
  the request part should be named I<t/apache/cool.t>. See the regular
  expression that does that in the previous section.
  
  =head2 Developing Test Response Handlers in C
  
  If you need to exercise some C API and you don't have a Perl glue for
  it, you can still use C<Apache::Test> for the testing. It allows you
  to write response handlers in C and makes it easy to integrate these
  with other Perl tests and to use Perl for the request part which will
  exercise the C module.
  
  The C modules look just like standard Apache C modules, with a couple
  of differences to:
  
  =over
  
  =item a
  
  help them fit into the test suite
  
  =item b
  
  allow them to compile nicely with Apache 1.x or 2.x.
  
  =back
  
  The I<httpd-test> ASF project is a good example to look at. The C
  modules are located under I<httpd-test/perl-framework/c-modules/>.
  Look at I<c-modules/echo_post/echo_post.c> for a nice simple example:
  C<mod_echo_post> simply echoes data that is C<POST>ed to it.
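
  The request part for such a C module is written in Perl just like for
  any other test. A minimal sketch (hypothetical; it assumes the module
  is mapped to the I</echo_post> location and uses the C<POST_BODY>
  shortcut exported by C<Apache::TestRequest>) might look like this:

    use strict;
    use warnings FATAL => 'all';

    use Apache::Test;
    use Apache::TestRequest;

    plan tests => 1;

    my $data = "hello mod_echo_post";
    # POST the data and expect it to be echoed back verbatim
    my $body = POST_BODY "/echo_post", content => $data;
    ok defined $body && $body eq $data;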
  
  The differences between the various tests may be summarized as
  follows:
  
  =over
  
  =item *
  
  If the first line is:
  
    #define HTTPD_TEST_REQUIRE_APACHE 1
  
  or
  
    #define HTTPD_TEST_REQUIRE_APACHE 2
  
  then the test will be skipped unless the version matches. If a module
  is compatible with the version of Apache used then it will be
  automatically compiled by I<t/TEST> with C<-DAPACHE1> or C<-DAPACHE2>
  so you can conditionally compile it to suit different httpd versions.
  
  =item *
  
  If there is a section bounded by:
  
    #if CONFIG_FOR_HTTPD_TEST
    ...
    #endif
  
  in the I<.c> file then that section will be inserted verbatim into
  I<t/conf/httpd.conf> by I<t/TEST>.
  
  =back
  
  There is a certain amount of magic which hopefully allows most modules
  to be compiled for Apache 1.3 or Apache 2.0 without any conditional
  stuff.  Replace XXX with the module name, for example echo_post or
  random_chunk:
  
  =over
  
  =item *
  
  You should:
  
    #include "apache_httpd_test.h" 
  
  which should be preceded by an:
  
    #define APACHE_HTTPD_TEST_HANDLER XXX_handler
  
  I<apache_httpd_test.h> pulls in a lot of required includes and defines
  some constants and types that are not defined for Apache 1.3.
  
  =item *
  
  The handler function should be:
  
    static int XXX_handler(request_rec *r);
  
  =item *
  
  At the end of the file should be an:
  
    APACHE_HTTPD_TEST_MODULE(XXX)
  
  where XXX is the same as that in C<APACHE_HTTPD_TEST_HANDLER>. This
  will generate the hooks and stuff.
  
  =back
  
  =head2 Request Generation Methods
  
  META: here goes the explanation of shortcuts: GET_BODY, POST_BODY,
  etc.
  
  =head2 Starting Multiple Servers
  
  By default the C<Apache::Test> framework sets up only a single server
  to test against.
  
  In some cases you need to have more than one server.  If this is the
  situation, you have to override the I<maxclients> configuration
  directive, whose default is 1. Usually this is done in I<t/TEST.PL>
  by subclassing the parent test run class and overriding the
  new_test_config() method. For example if the parent class is
  C<Apache::TestRunPerl>, you can change your I<t/TEST.PL> to be:
  
    use strict;
    use warnings FATAL => 'all';
    
    use lib "../lib"; # test against the source lib for easier dev
    use lib map {("../blib/$_", "../../blib/$_")} qw(lib arch);
    
    use Apache::TestRunPerl ();
    
    package MyTest;
    
    our @ISA = qw(Apache::TestRunPerl);
    
    # subclass new_test_config to add some config vars which will be
    # replaced in generated httpd.conf
    sub new_test_config {
        my $self = shift;
    
        $self->{conf_opts}->{maxclients} = 2;
    
        return $self->SUPER::new_test_config;
    }
    
    MyTest->new->run(@ARGV);
  
  =head2 Multiple User Agents
  
  By default the C<Apache::Test> framework uses a single user agent
  which talks to the server (this is the C<LWP> user agent, if you have
  C<LWP> installed). You almost never use this agent directly in the
  tests, but via various wrappers. However if you need a second user
  agent, you can clone the existing one. For example:
  
    my $ua2 = Apache::TestRequest::user_agent()->clone;
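
  The clone behaves like any other C<LWP> user agent, so you can issue
  requests with it directly. A small sketch (hypothetical; it reuses
  the I</test/amazing> URI from the earlier example and assumes C<LWP>
  is installed):

    use Apache::Test ();
    use Apache::TestRequest ();
    use HTTP::Request ();

    my $config   = Apache::Test::config();
    my $hostport = Apache::TestRequest::hostport($config);

    my $ua2 = Apache::TestRequest::user_agent()->clone;
    my $res = $ua2->request(
        HTTP::Request->new(GET => "http://$hostport/test/amazing"));
    print $res->content;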
  
  
  =head2 Hitting the Same Interpreter (Server Thread/Process Instance)
  
  When a single instance of the server thread/process is running, all
  the tests go through the same server. However if the C<Apache::Test>
  framework was configured to run a few instances, two subsequent
  sub-tests may not hit the same server instance. In certain tests
  (e.g. testing the closure effect or C<BEGIN> blocks) it's important
  to make sure that a sequence of sub-tests is run against the same
  server instance. The C<Apache::Test> framework supports this
  internally.
  
  Here is an example from the C<ModPerl::Registry> closure tests, using
  the counter closure problem under C<ModPerl::Registry>:
  
    file:cgi-bin/closure.pl
    -----------------------
    #!perl -w
    print "Content-type: text/plain\r\n\r\n";
    
    # this is a closure (when compiled inside handler()):
    my $counter = 0;
    counter();
    
    sub counter {
        #warn "$$";
        print ++$counter;
    }
  
  If this script gets invoked twice in a row and we make sure that it
  gets executed by the same server instance, the first time it'll
  return 1 and the second time 2. So here is the gist of the request
  part that makes sure that its two subsequent requests hit the same
  server instance:
  
    file:closure.t
    --------------
    ...
    my $url = "/same_interp/cgi-bin/closure.pl";
    my $same_interp = Apache::TestRequest::same_interp_tie($url);
    
    # expect the closure effect: the counter increments on each request
    my $first  = req($same_interp, $url);
    my $second = req($same_interp, $url);
    ok t_cmp(
        1,
        $first && $second && ($second - $first),
        "the closure problem is there",
    );
    sub req {
        my($same_interp, $url) = @_;
        my $res = Apache::TestRequest::same_interp_do($same_interp,
                                                      \&GET, $url);
        return $res ? $res->content : undef;
    }
  
  In this test we generate two requests to I<cgi-bin/closure.pl> and
  expect the returned value to increment for each new request, because
  of the closure problem generated by C<ModPerl::Registry>. Since we
  don't know whether some other test has called this script already, we
  simply check whether the subtraction of the two subsequent requests'
  outputs gives a value of 1.
  
  The test starts by requesting the server to tie a single instance to
  all requests made with a certain identifier. This is done using the
  same_interp_tie() function which returns a unique server instance
  identifier. From now on any request made through same_interp_do()
  that supplies this identifier as the first argument will be served by
  the same server instance. The second argument to same_interp_do() is
  the method to use for generating the request and the third is the URL
  to use. Extra arguments can be supplied if needed by the request
  generation method (e.g. headers).
  
  This technique works for testing purposes where we know that we have
  just a few server instances. What happens internally is that when
  same_interp_tie() is called, the server instance that served the
  request returns its unique UUID, so when we want to hit the same
  server instance in subsequent requests we keep generating the same
  request until we learn that we are being served by the server
  instance that we want. This magic is done using a fixup handler which
  returns C<OK> only if it sees that its unique id matches. As you can
  understand, this technique would be very inefficient in production
  with many server instances.
  
  =head1 Writing Tests
  
  All communication between the tests and C<Test::Harness>, which
  executes them, is done via STDOUT. I.e. whatever the tests want to
  report, they do by printing something to STDOUT. If a test wants to
  print some debug comment, it should do so starting on a separate
  line, and each debug line should start with C<#>. The t_debug()
  function from the C<Apache::TestUtil> package should be used for that
  purpose.
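
  For example (a trivial snippet):

    use Apache::TestUtil;

    t_debug("about to send the request");
    # prints to STDOUT:
    # about to send the request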
  
  
  
  =head2 Defining How Many Sub-Tests Are to Be Run
  
  Before the sub-tests of a certain test can be run, the test has to
  declare how many sub-tests it is going to run. In some cases the test
  may decide to skip some of its sub-tests or not to run any at
  all. Therefore the first thing the test has to print is:
  
    1..M\n
  
  where M is a positive integer. So if the test plans to run 5 sub-tests
  it should do:
  
    print "1..5\n";
  
  In C<Apache::Test> this is done as follows:
  
    use Apache::Test;
    plan tests => 5;
  
  
  
  =head2 Skipping a Whole Test
  
  Sometimes a test cannot be run because certain prerequisites are
  missing. To tell C<Test::Harness> that the whole test is to be
  skipped, do:
  
    print "1..0 # skipped because of foo is missing\n";
  
  The optional comment after C<# skipped> will be used as the reason
  for the test's skipping. Under C<Apache::Test> the optional last
  argument to the plan() function can be used to define prerequisites
  and skip the test:
  
    use Apache::Test;
    plan tests => 5, $test_skipping_prerequisites;
  
  This last argument can be:
  
  =over
  
  =item * a C<SCALAR>
  
  the test is skipped if the scalar has a false value. For example:
  
    plan tests => 5, 0;
  
  =item * an C<ARRAY> reference
  
  have_module() is called for each value in this array. The test is
  skipped if have_module() returns false (which happens when at least
  one C or Perl module from the list cannot be found). For example:
  
    plan tests => 5, [qw(mod_index mod_mime)];
  
  =item * a C<CODE> reference
  
  the test will be skipped if the function returns a false value. For
  example:
  
      plan tests => 5, \&have_lwp;
  
  the test will be skipped if LWP is not available.
  
  =back
  
  There are a number of useful functions whose return value can be used
  as the last argument for plan(); they can also be combined, as shown
  in the example after this list:
  
  =over
  
  =item * skip_unless()
  
  As an alternative to specifying a last argument for plan(), the
  skip_unless() function can be called before plan() to decide whether
  to skip the whole test or not. plan() won't be reached if
  skip_unless() decides to skip the test.
  
  skip_unless()'s argument is a list of things to test. The list can
  include scalars, which are passed to have_module(), and hash
  references. The hash references have a condition code reference as a
  key and a reason for failure as a value. The condition code is run,
  and if it fails the provided reason is used to tell the user why the
  test was skipped.
  
  For example:
  
    skip_unless({sub {$a==$b} => "$a != $b!",
                 sub {$a==1}  => "$a != 1!"},
                'LWP',
                'cgi_d',
                 {sub {0} => "forced to be skipped"},
               );
    plan tests => 5;
  
  In this example, the first argument is a hash reference which
  includes two pairs of condition test functions and the corresponding
  reasons, the second and the third arguments are scalars passed to
  have_module(), and the last argument is another hash reference with a
  single condition. This is just to demonstrate that you can supply
  conditions in various syntaxes and in no particular order. If any of
  the requirements from this list fail, plan() won't be called, since
  skip_unless() will call exit().
  
  =item * have_module()
  
  have_module() tests for the existence of Perl modules or C modules
  I<mod_*>. It accepts a list of modules or a reference to such a
  list. If at least one of the modules is not found it returns a false
  value, otherwise it returns a true value. For example:
  
    plan tests => 5, have_module qw(Chatbot::Eliza Apache::AI);
  
  will skip the whole test if at least one of the Perl modules
  C<Chatbot::Eliza> and C<Apache::AI> is not available.
  
  =item * have_perl()
  
  have_perl('foo') checks whether the value of C<$Config{foo}> or
  C<$Config{usefoo}> is equal to I<'define'>. For example:
  
    plan tests => 2, have_perl 'ithreads';
  
  if Perl wasn't compiled with C<-Duseithreads> the condition will be
  false and the test will be skipped.
  
  =item * have_lwp()
  
  Tests whether the Perl module LWP is installed.
  
  =item * have_http11()
  
  Tries to tell LWP that sub-tests need to be run under HTTP 1.1
  protocol. Fails if the installed version of LWP is not capable of
  doing that.
  
  =item * have_cgi()
  
  tests whether mod_cgi or mod_cgid is available.
  
  =item * have_apache()
  
  tests for a specific version of httpd. For example:
  
    plan tests => 2, have_apache 2;
  
  will skip the test if not run under httpd 2.x.
  
  =back
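
  For example, since plan()'s last argument may be any scalar
  condition, these helpers can be combined (a hypothetical
  combination):

    plan tests => 3, have_lwp() && have_cgi();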
  
  
  =head2 Skipping Numerous Tests
  
  Just like you can tell C<Apache::Test> to run only specific tests, you
  can tell it to run all but a few tests.
  
  If all files in a directory I<t/foo> should be skipped, create:
  
    file:t/foo/all.t
    ----------------
    print "1..0\n";
  
  Alternatively you can specify which tests should be skipped in a
  single file, I<t/SKIP>. This file includes a list of tests to be
  skipped. You can include comments starting with C<#> and you can use
  the C<*> wildcard character to match multiple files.
  
  For example, if in the mod_perl 2.0 test suite we create the
  following file:
  
    file:t/SKIP
    -----------
    # skip all files in protocol
    protocol
    
    # skip basic cgi test
    modules/cgi.t
    
    # skip all filter/input_* files
    filter/input*.t
  
  In our example the first pattern specifies the directory name
  I<protocol>, since we want to skip all the tests in it. But since the
  skipping is done by matching the skip patterns from I<t/SKIP> against
  the list of potential tests to be run, some other tests may be
  skipped as well if they match the pattern. Therefore it's safer to
  use a pattern like this:
  
    protocol/*.t
  
  The second pattern skips a single test, I<modules/cgi.t>. Note that
  you shouldn't specify the leading I<t/>. The I<.t> extension is
  optional, so you can write:
  
    # skip basic cgi test
    modules/cgi
  
  The last pattern tells C<Apache::Test> to skip all the tests starting
  with I<filter/input>.
  
  =head2 Reporting a Success or a Failure of Sub-tests
  
  After printing the number of planned sub-tests, and assuming that the
  test is not skipped, the test runs its sub-tests, and each sub-test
  is expected to report its success or failure by printing I<ok> or
  I<not ok> respectively, followed by its sequential number and a new
  line. For example:
  
    print "ok 1\n";
    print "not ok 2\n";
    print "ok 3\n";
  
  In C<Apache::Test> this is done using the ok() function, which prints
  I<ok> if its argument is a true value, otherwise it prints I<not
  ok>. In addition it keeps track of how many times it has been called,
  and each time it prints an incremented number; therefore you can move
  sub-tests around without needing to remember to adjust sub-tests'
  sequential numbers, since now you don't need them at all. For example
  this test snippet:
  
    use Apache::Test;
    use Apache::TestUtil;
    plan tests => 3;
    ok "success";
    t_debug("expecting to fail next test");
    ok "";
    ok 0;
  
  will print:
  
    1..3
    ok 1
    # expecting to fail next test
    not ok 2
    not ok 3
  
  Most of the sub-tests perform one of the following things:
  
  =over
  
  =item *
  
  test whether some variable is defined:
  
    ok defined $object;
  
  =item *
  
  test whether some variable is a true value:
  
    ok $value;
  
  or a false value:
  
    ok !$value;
  
  =item *
  
  test whether a value received from somewhere is equal to an expected
  value (see the t_cmp() example after this list):
  
    $expected = "a good value";
    $received = get_value();
    ok defined $received && $received eq $expected;
  
  =back
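
  The last pattern is common enough that the t_cmp() helper from
  C<Apache::TestUtil> (shown earlier) wraps it, and in the verbose mode
  it also reports the expected and received values; the same check
  (with get_value() as the placeholder used above) can be rewritten as:

    use Apache::TestUtil;

    $expected = "a good value";
    $received = get_value();
    ok t_cmp($expected, $received, "comparing received against expected");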
  
  
  
  
  
  
  =head2 Skipping Sub-tests
  
  If the standard output line contains the substring I< # Skip> (with
  variations in spacing and case) after I<ok> or I<ok NUMBER>, it is
  counted as a skipped test. C<Test::Harness> reports the text after I<
  # Skip\S*\s+> as the reason for skipping. So you can mark a sub-test
  as skipped as follows:
  
    print "ok 3 # Skip for some reason\n";
  
  or using C<Apache::Test>'s skip() function, which works similarly to
  ok():
  
    skip $should_skip, $test_me;
  
  so if C<$should_skip> is true, the test will be reported as
  skipped. The second argument is the one that's passed to ok(), so if
  C<$should_skip> is false, a normal ok() sub-test is run. The
  following example represents the four possible outcomes of using the
  skip() function:
  
    file:skip_subtest_1.t
    ---------------------
    use Apache::Test;
    plan tests => 4;
    
    my $ok     = 1;
    my $not_ok = 0;
    
    my $should_skip = "foo is missing";
    skip $should_skip, $ok;
    skip $should_skip, $not_ok;
    
    $should_skip = '';
    skip $should_skip, $ok;
    skip $should_skip, $not_ok;
  
  now we run the test:
  
    % ./t/TEST -run-tests -verbose skip_subtest_1
    skip_subtest_1....1..4
    ok 1 # skip foo is missing
    ok 2 # skip foo is missing
    ok 3
    not ok 4
    # Failed test 4 in skip_subtest_1.t at line 13
    Failed 1/1 test scripts, 0.00% okay. 1/4 subtests failed, 75.00% okay.
  
  As you can see, since C<$should_skip> had a true value, the first two
  sub-tests were explicitly skipped (using C<$should_skip> as the
  reason), so the second argument to skip() didn't matter. In the last
  two sub-tests C<$should_skip> had a false value, therefore the second
  argument was passed to the ok() function. Basically the following
  code:
  
    $should_skip = '';
    skip $should_skip, $ok;
    skip $should_skip, $not_ok;
  
  is equivalent to:
  
    ok $ok;
    ok $not_ok;
  
  C<Apache::Test> also allows you to write tests in such a way that only
  selected sub-tests will be run.  The test simply needs to switch from
  using ok() to sok().  The argument to sok() is a CODE reference or a
  BLOCK whose return value will be passed to ok().  If sub-tests are
  specified on the command line, only those will be run (passed to
  ok()); the rest will be skipped.  If no sub-tests are specified, sok()
  works just like ok().  For example, you can write this test:
  
    file:skip_subtest_2.t
    ---------------------
    use Apache::Test;
    plan tests => 4;
    sok {1};
    sok {0};
    sok sub {'true'};
    sok sub {''};
  
  and then ask to run only sub-tests 1 and 3 and to skip the rest.
  
    % ./t/TEST -verbose skip_subtest_2 1 3
    skip_subtest_2....1..4
    ok 1
    ok 2 # skip skipping this subtest
    ok 3
    ok 4 # skip skipping this subtest
    ok, 2/4 skipped:  skipping this subtest
    All tests successful, 2 subtests skipped.
  
  Only sub-tests 1 and 3 get executed, while 2 and 4 are skipped.
  
  A range of sub-tests to run can be given using Perl's range
  operator:
  
    % ./t/TEST -verbose skip_subtest_2 2..4
    skip_subtest_2....1..4
    ok 1 # skip skipping this subtest
    not ok 2
    # Failed test 2
    ok 3
    not ok 4
    # Failed test 4
    Failed 1/1 test scripts, 0.00% okay. 2/4 subtests failed, 50.00% okay.
  
  In this run, sub-tests 2, 3 and 4 get executed, while the first one is
  skipped.
  
  =head2 Todo Sub-tests
  
  In a similar fashion to skipping specific sub-tests, it's possible to
  declare some sub-tests as I<todo>. This distinction is useful when we
  know that some sub-test is failing but for some reason we want to flag
  it as a todo sub-test rather than as a broken test. C<Test::Harness>
  counts a sub-test as I<todo> if the standard output line contains the
  substring I< # TODO> after I<not ok> or I<not ok NUMBER>.  The text
  that follows explains what has to be done before this sub-test will
  succeed. For example:
  
    print "not ok 42 # TODO not implemented\n";
  
  In C<Apache::Test> this can be done by passing a reference to a list
  of the sub-test numbers that should be marked as I<todo> sub-tests:
  
    plan tests => 7, todo => [3, 6];
  
  In this example sub-tests 3 and 6 will be marked as I<todo> sub-tests.
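  
  For example, a minimal sketch of a test with one sub-test that is
  known to fail until the feature is implemented could look like this:
  
    use Apache::Test;
  
    # sub-test 2 is expected to fail until feature foo is implemented
    plan tests => 3, todo => [2];
  
    ok 1;
    ok 0;    # reported as a todo sub-test, not as a real failure
    ok 1;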
  
  
  
  
  
  =head2 Making it Easy to Debug
  
  Ideally we want all the tests to pass, reporting minimal noise or none
  at all. But when some sub-tests fail we want to know the reason for
  their failure. If you are a developer you can dive into the code and
  easily find out what the problem is, but when a user has a problem
  with the test suite it'll make their life and yours much easier if you
  make it easy for the user to report the exact problem to you.
  
  Usually this is done by printing a comment describing what the
  sub-test does, what the expected value is and what the received value
  is. Here is a good example of a debug-friendly sub-test:
  
    file:debug_comments.t
    ---------------------
    use Apache::Test;
    use Apache::TestUtil;
    plan tests => 1;
    
    t_debug("testing feature foo");
    $expected = "a good value";
    $received = "a bad value";
    t_debug("expected: $expected");
    t_debug("received: $received");
    ok defined $received && $received eq $expected;
  
  Since in this example C<$received> gets assigned the string I<a bad
  value>, the test will print the following:
  
    % t/TEST debug_comments
    debug_comments....FAILED test 1
  
  No debug help here, since in a non-verbose mode the debug comments
  aren't printed.  If we run the same test using the verbose mode,
  enabled with C<-verbose>:
  
    % t/TEST -verbose debug_comments
    debug_comments....1..1
    # testing feature foo
    # expected: a good value
    # received: a bad value
    not ok 1
  
  we can see exactly what the problem is, by visually inspecting the
  expected and received values.
  
  It's true that adding a few print statements for each sub-test is
  cumbersome and adds a lot of noise, when you could simply write:
  
    ok "a good value" eq "a bad value";
  
  but have no fear, C<Apache::TestUtil> comes to the rescue. The
  function t_cmp() does all the work for you:
  
    use Apache::Test;
    use Apache::TestUtil;
    ok t_cmp(
        "a good value",
        "a bad value",
        "testing feature foo");
  
  In addition it handles C<undef> values as well, so you can do:
  
    ok t_cmp(undef, $expected, "should be undef");
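  
  Putting it all together, a complete t_cmp()-based version of the
  earlier I<debug_comments.t> could look like this (a sketch; the
  received value is hardcoded here just as it was above):
  
    file:debug_comments.t
    ---------------------
    use Apache::Test;
    use Apache::TestUtil;
    plan tests => 1;
  
    ok t_cmp("a good value",      # expected
             "a bad value",       # received
             "testing feature foo");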
  
  
  
  
  
  =head2 Tie-ing STDOUT to a Response Handler Object
  
  It's possible to run the sub-tests in the response handler and simply
  return their output as a response to the client, which in turn will
  print it out. Unfortunately in this case you cannot use ok() and the
  other functions directly, since they print the results rather than
  return them, therefore you have to do it manually. For example:
  
    sub handler {
        my $r = shift;
    
        $r->print("1..2\n");
        $r->print("ok 1\n");
        $r->print("not ok 2\n");
      
        return Apache::OK;
    }
  
  Now the client part of the test should print the response to
  C<STDOUT> for C<Test::Harness> processing.
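  
  The request part of such a test can be a one-liner (a sketch; it
  assumes the handler above is reachable under the hypothetical
  I</TestResponse::manual> location and uses the GET_BODY shortcut
  exported by C<Apache::TestRequest>):
  
    use Apache::TestRequest 'GET_BODY';
  
    # fetch the handler's "1..2\nok 1\n..." output and print it to
    # STDOUT so that Test::Harness can parse it;
    # '/TestResponse::manual' is a hypothetical location configured
    # for the handler above
    print GET_BODY '/TestResponse::manual';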
  
  If the response handler is configured as:
  
    SetHandler perl-script
  
  C<STDOUT> is already tied to the request object C<$r>. Therefore you
  can now rewrite the handler as:
  
    use Apache::Test;
    sub handler {
        my $r = shift;
    
        Apache::Test::test_pm_refresh();
        plan tests => 2;
        ok "true";
        ok "";
      
        return Apache::OK;
    }
  
  Note that to be on the safe side you also have to call
  Apache::Test::test_pm_refresh(), as in the example above, which allows
  plan() and friends to be called more than once per process.
  
  Under different settings C<STDOUT> is not tied to the request object.
  If the first argument to plan() is an object, such as an
  C<Apache::RequestRec> object, C<STDOUT> will be tied to it. The
  C<Test.pm> global state will also be refreshed by calling
  C<Apache::Test::test_pm_refresh>. For example:
  
    use Apache::Test;
    sub handler {
        my $r = shift;
    
        plan $r, tests => 2;
        ok "true";
        ok "";
      
        return Apache::OK;
    }
  
  Yet another alternative for handling the test framework's printing
  inside a response handler is to use the C<Apache::TestToString> class.
  
  The C<Apache::TestToString> class is used to capture C<Test.pm> output
  into a string.  Example:
  
    use Apache::Test;
    sub handler {
        my $r = shift;
    
        Apache::TestToString->start;
    
        plan tests => 2;
        ok "true";
        ok "";
      
        my $output = Apache::TestToString->finish;
        $r->print($output);
    
        return Apache::OK;
    }
  
  In this example C<Apache::TestToString> intercepts and buffers all the
  output from C<Test.pm>; the buffered output can then be retrieved with
  the finish() method and printed to the client in one shot. Internally
  it calls Apache::Test::test_pm_refresh() to make sure plan(), ok() and
  the other functions work correctly when more than one test is running
  under the same interpreter.
  
  =head2 Auto Configuration
  
  If the test consists only of the request part, you have to manually
  configure the targets you are going to use. This is usually done in
  I<t/conf/extra.conf.in>.
  
  If your tests consist of both request and response parts,
  C<Apache::Test> automatically adds a configuration section for each
  response handler it finds. For example, for the response handler:
  
    package TestResponse::nice;
    ... some code
    1;
  
  it will put into I<t/conf/httpd.conf>:
  
    <Location /TestResponse::nice>
        SetHandler modperl
        PerlResponseHandler TestResponse::nice
    </Location>
  
  If you want to add some extra configuration directives, use the
  C<__DATA__> section, as in this example:
  
    package TestResponse::nice;
    ... some code
    1;
    __DATA__
    PerlSetVar Foo Bar
  
  These directives will be wrapped into the C<E<lt>LocationE<gt>>
  section and placed into I<t/conf/httpd.conf>:
  
    <Location /TestResponse::nice>
        SetHandler modperl
        PerlResponseHandler TestResponse::nice
        PerlSetVar Foo Bar
    </Location>
  
  This autoconfiguration feature was added to:
  
  =over
  
  =item *
  
  simplify test configuration (fewer lines).
  
  =item *
  
  ensure unique namespace for E<lt>Location ...E<gt>'s.
  
  =item *
  
  force E<lt>Location ...E<gt> names to be consistent.
  
  =item *
  
  prevent clashes within main configuration.
  
  =back
  
  If some directives are supposed to go into the base configuration,
  i.e. not to be automatically wrapped into the C<E<lt>LocationE<gt>>
  block, you should use a special C<E<lt>BaseE<gt>>..C<E<lt>/BaseE<gt>>
  block:
  
    __DATA__
    <Base>
        PerlSetVar Config ServerConfig
    </Base>
    PerlSetVar Config LocalConfig
  
  Now the autogenerated section will look like this:
  
    PerlSetVar Config ServerConfig
    <Location /TestResponse::nice>
       SetHandler modperl
       PerlResponseHandler TestResponse::nice
       PerlSetVar Config LocalConfig
    </Location>
  
  As you can see, the C<E<lt>BaseE<gt>>..C<E<lt>/BaseE<gt>> block is
  gone. As you can imagine, this block was added to support our virtue
  of laziness: most tests don't need to add directives to the base
  configuration, and we want to keep the configuration sections in tests
  to a minimum and let Perl do the rest of the job for us.
  
  
  
  META: Virtual host?
  
  META: to be completed
  
  
  =head2 Threaded versus Non-threaded Perl Test Compatibility
  
  Since the tests are supposed to run properly under both non-threaded
  and threaded perl, you have to remember to enclose the configuration
  bits specific to threaded perl in:
  
    <IfDefine PERL_USEITHREADS>
        ... configuration bits
    </IfDefine>
  
  C<Apache::Test> will start the server with -DPERL_USEITHREADS if the
  Perl is ithreaded.
  
  For example C<PerlOptions +Parent> is valid only for the threaded
  perl, therefore you have to write:
  
    <IfDefine PERL_USEITHREADS>
        # a new interpreter pool
        PerlOptions +Parent
    </IfDefine>
  
  Just like the configuration, the test's code has to work for both
  versions as well. Therefore you should wrap the code specific to the
  threaded perl into:
  
    if (have_perl 'ithreads'){
        # ithread specific code
    }
  
  which essentially does a lookup in C<$Config{useithreads}>.
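  
  For example, an ithreads-only sub-test can be skipped on non-threaded
  perls by combining have_perl() with the skip() function described
  earlier (a sketch; the constant 1 stands in for the real
  ithreads-specific check):
  
    use Apache::Test;
  
    plan tests => 2;
  
    ok 1;    # runs under any perl
  
    my $reason = have_perl('ithreads')
        ? ''
        : 'requires an ithreads-enabled perl';
    skip $reason, 1;    # replace 1 with the real ithreads-specific check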
  
  =head1 Debugging Tests
  
  Sometimes your tests won't run properly or, even worse, will
  segfault. There are cases where it's possible to debug broken tests
  with simple print statements, but usually it's very time-consuming and
  ineffective. Therefore it's a good idea to familiarize yourself with
  the Perl and C debuggers; this knowledge will save you a lot of time
  and grief in the long run.
  
  =head2 Under C debugger
  
  mod_perl 2.0 provides a built-in 'make test' debug facility. So in
  case you get a core dump during 'make test', or just for fun, run in
  one shell:
  
    % t/TEST -debug
  
  in another shell:
  
    % t/TEST -run-tests
  
  then the I<-debug> shell will show a C<(gdb)> prompt; type C<where>
  to get a stack trace:
  
    (gdb) where
  
  You can change the default debugger by supplying the name of the
  debugger as an argument to I<-debug>. E.g. to run the server under
  C<ddd>:
  
    % ./t/TEST -debug=ddd
  
  META: list supported debuggers
  
  If you debug mod_perl internals you can set breakpoints using the
  I<-breakpoint> option, which can be repeated as many times as
  needed. When you set at least one breakpoint, the server will start
  running until it reaches the I<ap_run_pre_config> breakpoint. At this
  point breakpoints can be set in the mod_perl code, something that
  cannot be done earlier if mod_perl was built as a DSO. For example:
  
    % ./t/TEST -debug -breakpoint=modperl_cmd_switches \
       -breakpoint=modperl_cmd_options
  
  will set the I<modperl_cmd_switches> and I<modperl_cmd_options>
  breakpoints and run the debugger.
  
  If you want to tell the debugger to jump to the start of the mod_perl
  code you may run:
  
    % ./t/TEST -debug -breakpoint=modperl_hook_init
  
  In fact I<-breakpoint> automatically turns on the debug mode, so you
  can run:
  
    % ./t/TEST -breakpoint=modperl_hook_init
  
  
  
  =head2 Under Perl debugger
  
  When the Perl code misbehaves it's best to run it under the Perl
  debugger. Normally it is started as:
  
    % perl -d program.pl
  
  Flow control then passes to the Perl debugger, which allows you to
  run the program in single steps and examine its state and variables
  after every executed statement. Of course you can set breakpoints
  and watches to skip irrelevant code sections and monitor certain
  variables. The I<perldebug> and I<perldebtut> manpages cover the Perl
  debugger in fine detail.
  
  The C<Apache::Test> framework extends the Perl debugger and plugs in
  C<LWP>'s debug features, so you can debug the requests. Let's take
  test I<apache/read> from mod_perl 2.0 and present the features as we
  go:
  
  META: to be completed
  
    # run a .t test under the Perl debugger
    % t/TEST -debug perl t/modules/access.t
  
    # run a .t test under the Perl debugger
    # (nonstop mode, output goes to t/logs/perldb.out)
    % t/TEST -debug perl=nostop t/modules/access.t
  
    # turn on -v and LWP trace mode (level 1 is the default)
    # in Apache::TestRequest
    % t/TEST -debug lwp t/modules/access.t
  
    # turn on -v and LWP trace mode (level 2) in Apache::TestRequest
    % t/TEST -debug lwp=2 t/modules/access.t
  
  
  =head2 Tracing
  
  To start the server under strace(1):
  
    % t/TEST -debug strace
  
  The output goes to I<t/logs/strace.log>.
  
  Now in a second terminal run:
  
    % t/TEST -run-tests
  
  Beware that I<t/logs/strace.log> is going to be very big.
  
  META: can we provide strace(1) opts if we want to see only certain
  syscalls?
  
  
  
  
  =head1 Writing Tests Methodology
  
  META: to be completed
  
  
  =head2 When Tests Should Be Written
  
  =over
  
  =item * A New Feature is Added
  
  Every time a new feature is added new tests should be added to cover
  the new feature.
  
  =item * A Bug is Reported
  
  Every time a bug gets reported, before you even attempt to fix the
  bug, write a test that exposes the bug. This will make it much easier
  for you to verify whether your fix actually fixes the bug.
  
  Now fix the bug and make sure that test passes ok.
  
  It's possible that several tests can be written to expose the same
  bug. Write them all: the more tests you have, the smaller the chance
  that a bug remains in your code.
  
  If the person reporting the bug is a programmer you may try to ask her
  to write the test for you. But usually, if the report includes simple
  code that reproduces the bug, it should be easy to convert that code
  into a test.
  
  =back
  
  
  
  
  =head1 References
  
  =over
  
  =item * extreme programming methodology
  
  Extreme Programming: A Gentle Introduction:
  http://www.extremeprogramming.org/.
  
  Extreme Programming: http://www.xprogramming.com/.
  
  See also other sites linked from these URLs.
  
  =back
  
  =head1 Maintainers
  
  Maintainer is the person(s) you should contact with updates,
  corrections and patches.
  
  Stas Bekman E<lt>stas (at) stason.orgE<gt>
  
  =head1 Authors
  
  =over
  
  =item * Gary Benson E<lt>gbenson (at) redhat.comE<gt>
  
  =item * Stas Bekman E<lt>stas (at) stason.orgE<gt>
  
  =back
  
  =cut
  
  
  
