Posted to test-dev@httpd.apache.org by Aaron Bannert <aa...@clove.org> on 2001/08/23 18:30:23 UTC

flood and fork(), and test reports

[continued from a recent discussion on new-httpd or dev@httpd or whatever
that list is called nowadays ;) ]

It should be fairly trivial to add fork() support to flood. I've had
something in mind for awhile, so I might be able to patch that up later
today.
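
Roughly what I have in mind (just an untested sketch; run_farmer() is a
stand-in for whatever per-child test loop we settle on):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* hypothetical per-child test loop; stands in for flood's real work */
    extern int run_farmer(int id);

    int spawn_farmers(int nprocs)
    {
        int i;
        pid_t pid;

        for (i = 0; i < nprocs; i++) {
            pid = fork();
            if (pid < 0) {
                perror("fork");
                return -1;
            }
            if (pid == 0) {
                /* child: run the test loop, exit with its status */
                exit(run_farmer(i));
            }
            /* parent: keep forking the rest */
        }

        /* parent: reap all children before reporting */
        for (i = 0; i < nprocs; i++) {
            int status;
            wait(&status);
        }
        return 0;
    }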

The only lingering problem is how to do reporting. Right now, like you
mentioned, Justin, flood only has a couple of reporting techniques, and
pretty much outputs raw per-hit statistics. This requires us to do
postprocessing on that data to calculate any aggregate statistics
(totals, averages, stddev, etc...). Although we have some neat
awk scripts to do this (which we really should add to CVS), I foresee
us wanting to do this on a much larger scale and at runtime. For example:

10 machines, each emulating 15 "users" simultaneously. Each machine
emulates one "user" with one thread or fork()ed child process. A "user"
is some recorded real-world interaction with the system, which typically
amounts to: 1) a single hit to a page in keepalive mode, 2) 3 more keepalive
hits in addition to the original to retrieve some static content associated
with the page, and 3) some delay until we start over at step #1.
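
In code, one of these "user" loops would look something like this
(fetch_keepalive() and the URLs here are made-up placeholders for
whatever the url list actually provides):

    #include <unistd.h>

    /* hypothetical: one keepalive GET on an already-open connection */
    extern int fetch_keepalive(const char *url);

    void run_user(void)
    {
        /* hypothetical static content associated with the page */
        static const char *assets[] = { "/a.gif", "/b.gif", "/c.css" };
        int i;

        for (;;) {
            /* step 1: a single hit to the page in keepalive mode */
            fetch_keepalive("/index.html");

            /* step 2: 3 more keepalive hits for associated static content */
            for (i = 0; i < 3; i++)
                fetch_keepalive(assets[i]);

            /* step 3: some delay, then start over at step 1 */
            sleep(5);
        }
    }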

Now, while all this is going on and we are slamming the hell out of the
target system, each of these "farmers" (as we call them) is generating
some amount of reporting statistics. We can't say what these reports
will consist of, other than the timing metrics we have already defined,
and the result of the verify step. I think we are going to want a way
for each of these "farmers" to report back to the original parent flood
process (the one that invoked ssh/rsh to start a bunch of remote floods)
so that the parent process can do the data aggregation and spit out a
final report.
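
The aggregation itself is the easy part. Once the parent has the raw
per-hit times back from every farmer, something like this covers the
averages and stddev (sketch; assumes the times come in as an array of
microsecond values, n > 0):

    #include <math.h>

    /* compute mean and standard deviation over raw per-hit times (usec) */
    void aggregate(const double *times, int n, double *mean, double *stddev)
    {
        double sum = 0.0, sumsq = 0.0;
        int i;

        for (i = 0; i < n; i++) {
            sum += times[i];
            sumsq += times[i] * times[i];
        }
        *mean = sum / n;
        *stddev = sqrt(sumsq / n - (*mean) * (*mean));
    }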

Originally I had proposed that whatever "report" mechanism we plugged in
to flood would handle this, for example we would have an "average times"
plugin that would spit out XML at the farmer level, which would get
sent back over rsh/ssh to the parent flood, which would collect that
junk back up and do the calculations. I'm not so sure this is as flexible
as I want, so I'm looking for suggestions. Let's get some discussion going
and maybe find out what kind of reports we'll want back from flood.
I'm aiming for some intermediate format that can be pulled apart later,
a la sar's binary format, but I want to get more feedback.
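
To make that concrete, I'm picturing something as simple as a fixed-size
record per hit, written raw to a file or pipe and pulled apart later by
whatever report tool we like. The fields below are just a strawman:

    #include <stdio.h>

    /* strawman fixed-size per-hit record, sar-style: write raw, parse later */
    typedef struct {
        unsigned int farmer_id;   /* which farmer produced this hit */
        unsigned int url_id;      /* index into the url list */
        unsigned int open_usec;   /* connect time */
        unsigned int write_usec;  /* request send time */
        unsigned int read_usec;   /* full response read time */
        unsigned int verified;    /* result of the verify step */
    } flood_record_t;

    int write_record(FILE *out, const flood_record_t *rec)
    {
        return fwrite(rec, sizeof(*rec), 1, out) == 1 ? 0 : -1;
    }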

-aaron