Posted to dev@httpd.apache.org by Stefan Fritsch <sf...@sfritsch.de> on 2016/12/30 22:55:02 UTC

Automated tests

Hi,

it's quite rare that I have a bit of time for httpd nowadays. But I want to 
comment on a mail that Jacob Champion wrote on -security that contains some 
valid points about the shortcomings of our test framework. I am posting this 
to -dev with his permission.

On Wednesday, 21 December 2016 08:55:30 CET Jacob Champion wrote:
> - Our APIs are really complex, and we don't really have unit tests for
> them. Nor are the internal APIs documented as well as the external APIs
> are. We had a few false starts for security fixes this release that were
> later shown to break something else, and I think that's related.

Yes, httpd lacks unit tests. One problem is that many APIs depend on very 
complex structs like request_rec, conn_rec, server_conf, etc. In order to 
write unit tests for such APIs, one would need to write quite a bit of 
infrastructure to set these things up. I think it would be worth the effort, 
but it's not a small task. As there does not seem to be anybody with enough 
spare time to do it, one could possibly ask someone (CII?) for funding.

A possible approach would be to compile the unit tests into the server and 
execute them on startup if a special define is given (like the various DUMP_* 
defines). Not sure how to get access to all the static helper functions for unit 
tests, though, unless one would somehow include the tests in the same .c file.

> == Tests and Test Culture ==
> 
> Tests for security fixes are especially important, to prevent
> regressions from committers who weren't part of the embargoed
> conversations. But our current test battery isn't deep enough to match
> the complexity of the server, and many fixes for complex bugs (whether
> security-related or not) are going in without test cases.
> 
> Compared to many other test suites I have used, our tests:
> - are difficult for newcomers to install, run, write, and/or extend
> - are not committed in the same breath as the bugfixes or features they
> test, since they exist in a separate project root
> - are not low-level enough to test our C API directly, at least as far
> as I can tell. (I.e. if your code can't be easily called from a module,
> you're out of luck.)

If the test suite were easier to run, maybe more people would submit 
tests. Is there a reason why the test suite is in a separate repository? Would 
it help if it were moved into the normal httpd repo? Would it make sense to 
include it in the release tarballs, possibly including the necessary non-standard 
perl modules? And include it in the makefiles in a way that a user can 
install a set of standard perl modules (from a distribution or CPAN) and then 
call "make test" to start it? What is in the test/ dir in the httpd repo right 
now seems mostly useless and could probably be removed.

Another idea to make writing tests more attractive could be to somehow include 
it in the backporting policy. For example, if there is a test for a new 
feature (positive and error handling) or a bug fix, we could require only two 
+1s for a backport.


> For a guy like me who prefers doing his own work test-driven, it's just
> kinda painful.
> 
> Furthermore, the Apache::Test project appears to be a separate thing
> entirely...? I don't know *how* separate it is, but I consider that a
> red flag. To write code quickly, the way you need to when faced with
> immediate security problems, you need 100% control of your test suite --
> including the ability to write embargoed tests.
> 
> To give you a concrete example: I know that one of the httpd tests I
> wrote before being invited to this project was left uncommitted, because
> it would have required changes to Apache::Test, and that just wasn't
> worth the effort.

I have never tried to change Apache::Test. But in any case, understanding it 
enough to fix or extend it is a significant hurdle.

Another thing that is missing: A buildbot that builds current trunk (and 
possibly 2.x branches) and runs the test suite and alerts the dev list of 
regressions. I guess this "just" needs a volunteer to set it up and document 
it and the ASF would provide the infrastructure.

Cheers,
Stefan


Re: Autobuild Progress (was Re: Automated tests)

Posted by Jacob Champion <ch...@gmail.com>.
On 01/30/2017 06:23 PM, Daniel Shahaf wrote:
> You could also configure buildbot to detect any new compiler warnings.
> That'll cause buildbot to report a FAILURE (red) or a WARNING (yellow),
> instead of a SUCCESS (green), if the build succeeded but a compiler
> warning has been issued.

Good suggestion. I *think* the eventual goal is to move to -Werror for 
maintainer-mode builds, which would remove buildbot from the equation 
entirely (and force committers to fix their stuff before committing :P), 
but there's currently a bug in trunk's configure script that keeps it 
from being enabled as intended.

I'll keep this in mind in case enabling -Werror proves to have other 
issues (or if we come across a platform without a reliable -Werror). Thanks!

--Jacob

Re: Autobuild Progress (was Re: Automated tests)

Posted by Daniel Shahaf <d....@daniel.shahaf.name>.
Jacob Champion wrote on Mon, Jan 30, 2017 at 12:02:35 -0800:
> On 01/02/2017 07:53 AM, Daniel Shahaf wrote:
> >Setting this up isn't a lot more complicated than filing an INFRA ticket
> >with a build script, a list of build dependencies, and a list of
> >branches to build, and deciding how build failures would be notified.
> 
> To follow up on this, we now have an operational (if not yet fully
> functional) buildbot for trunk:
> 
>     https://ci.apache.org/builders/httpd-trunk
> 

Nice!

> There's a lot of work yet to do, but for now we have an Ubuntu machine that
> can be manually triggered to run an incremental build on trunk.
> 
> Here's my list of TODOs:
> - run per-commit incremental builds
> - run nightly clean builds
> - run a nightly test suite
> - set up 2.4.x in addition to trunk
> - set up Windows in addition to Ubuntu

You could also configure buildbot to detect any new compiler warnings.
That'll cause buildbot to report a FAILURE (red) or a WARNING (yellow),
instead of a SUCCESS (green), if the build succeeded but a compiler
warning has been issued.

(Currently, the build passes the -Wall flag and has no warnings.)

You can crib the setup from the 'svn-warnings' builder in
subversion.conf in the same directory.

Cheers,

Daniel

Re: Autobuild Progress

Posted by Eric Covener <co...@gmail.com>.
On Tue, Mar 7, 2017 at 3:24 PM, Jacob Champion <ch...@gmail.com> wrote:
> Getting 2.4.x going will require some backports, so I'm planning to look
> into running the test suite against trunk (probably not next week but the
> week after). Unless there's anyone who *really* wants 2.4.x autobuilding
> ASAP. Thoughts?

I think the tests are probably a higher priority.


-- 
Eric Covener
covener@gmail.com

Re: Autobuild Progress

Posted by Jacob Champion <ch...@gmail.com>.
On 02/02/2017 12:22 PM, Jacob Champion wrote:
> Every commit you make to trunk (well, group of commits, within
> fifteen seconds of each other) is run through an incremental build,
> which takes about ten seconds. Every eight hours, the build tree is
> clobbered, resync'd, and built from scratch, which takes about a
> minute.
>
> Reporting is still IRC-only. I want to keep an eye on the bots for a bit
> before unleashing them upon the mailing list.

Since we're having a parallel discussion about the test suite, it seems 
like a good time to update you on this piece as well.

     https://ci.apache.org/builders/httpd-trunk

The builder has been happily chugging away for over a month. It caught 
two bad trunk commits (both of which, IIRC, were fixed independently 
within ten minutes). There's been only one spurious failure, related to 
SVN complaining about DNS hostnames in the middle of a sync. (This 
happened around the same time that apache.org hosts were undergoing some 
sort of certificate strangeness, so I'm not really worried about it yet.)

So I think it's safe to start sending build reports to the mailing list.

We have the following wishlist items left:
- run a nightly test suite
- set up 2.4.x in addition to trunk
- set up Windows in addition to Ubuntu

Getting 2.4.x going will require some backports, so I'm planning to look 
into running the test suite against trunk (probably not next week but 
the week after). Unless there's anyone who *really* wants 2.4.x 
autobuilding ASAP. Thoughts?

--Jacob

Re: Autobuild Progress (was Re: Automated tests)

Posted by Jacob Champion <ch...@gmail.com>.
On 01/30/2017 12:02 PM, Jacob Champion wrote:
> - run per-commit incremental builds
> - run nightly clean builds

These two are implemented. Every commit you make to trunk (well, group 
of commits, within fifteen seconds of each other) is run through an 
incremental build, which takes about ten seconds. Every eight hours, the 
build tree is clobbered, resync'd, and built from scratch, which takes 
about a minute.

Reporting is still IRC-only. I want to keep an eye on the bots for a bit 
before unleashing them upon the mailing list.

--Jacob


Re: Autobuild Progress (was Re: Automated tests)

Posted by Daniel Ruggeri <DR...@primary.net>.
On 1/31/2017 4:30 PM, Jacob Champion wrote:
> On 01/30/2017 05:39 PM, Daniel Ruggeri wrote:
>> I'm tremendously inspired by this work. What are your thoughts on the
>> idea of having a series of docker container builds that compile and run
>> the test suite on various distributions? I'll volunteer to give this a
>> whack since it's something that's been in the back of my mind for a long
>> while...
>
> I think that would be awesome. The cheaper we can make new test
> distributions, the easier we can test all sorts of different
> configurations (which, given how many knobs and buttons we expose, is
> important).
>
> I don't know how much of Infra's current Puppet/Buildbot framework is
> Docker-friendly, but if there's currently no cheap virtualization
> solution there for build slaves, then anything we added would
> potentially be useful for other ASF projects as well. Definitely
> something to start a conversation over.
>

Yes, definitely. Thinking more about this, even adding something
heavyweight like a type 2 hypervisor could potentially provide value so
long as the VM image is stripped down enough and we don't leave junk
behind on the slave. I'm not concerned about Puppet and buildbot
integration since Puppet is a great way to manage the configuration of
the slave (assuming that's what it's used for), which makes it easy to
have Docker, VirtualBox, Vagrant or whatever installed and configured.

As far as buildbot goes, I'm sure it will support the execution of a
script, which is all that's needed. My latest work with the
RemoteIPProxyProtocol stuff has me compiling httpd on my build machine
and standing up a docker container with haproxy inside. Hitting the
resulting build under various circumstances with wget scratches the
itch. I've got this distilled down into only four files (Dockerfile,
haproxy.cfg, setup script and test script). This is nice because...
well... I just don't want to install haproxy on my build box for this

In any event, I've started the conversation with builds@a.o to see
what's doable. Can crosspost or just return with feedback when I hear.


> (Side thought: since raw speed is typically one of the top priorities
> for a CI test platform, we'd have to carefully consider which items we
> tested by spinning up containers and which we ran directly on a
> physical machine. Though I don't know how fast Docker has gotten with
> all of the fancy virtualization improvements.)

Amen to that. Docker's quite fast since lxc and all the stuff around it
are very lightweight. The slowest parts are pulling the base image and
setting it up (installing compilers, the test framework, tools, etc).
This can be sped up greatly by building the image and publishing it back
to a (or "the") registry or keeping it local on the machine, but we'd
then have to maintain images, which I'm not a fan of.


>
>> I think with the work you've done and plan to do, a step like above to
>> increase our ability to test against many distributions all at once (and
>> cheaply) and also making the test framework more approachable, we could
>> seriously increase our confidence when pulling the trigger on a release
>> or accepting a backport.
>
> +1. It'll take some doing (mostly focused on the coverage of the test
> suite itself), but we can get there.
>
>> I'm also a big fan of backports requiring tests, but am honestly
>> intimidated by the testing framework...
>
> What would make it less intimidating for you? (I agree with you, but
> I'm hoping to get your take without biasing it with my already-strong
> opinions. :D)

Opinions here... so take them with a grain of salt.
* The immediate barrier to entry is doco. From the test landing page,
you are greeted with a list of what's in the test project and links to
the components. Of the links there (our super important one is Perl
Framework), only the flood link leads to a useful getting started guide.
This may be lazy and kinda preachy, but not having good developer info
easily accessible is a Bad Thing(tm) since it's a surefire way to scare
off those potentially interested in participating in a project.
* It's also intimidating when a developer realizes they need to learn a
new skill set to create tests. Writing tests for Perl's testing
framework feels archaic, and I'm not sure it's a skill many potential
contributors would possess unless they have developed Perl modules for
distribution. I understand the history of the suite so I _get_ why it's
this way... it's just that it is likely a turn-off. Disclaimer: I'm not
saying Perl has a bad testing framework. I have yet to find a testing
framework I'm a big fan of since they all have their idiosyncrasies. No
holy wars, please :-)
* Another barrier that I think is very much worth pointing out is that
several Perl modules must be installed. I have some history fighting
with Crypt::SSLeay to do what I want because it can be rather finicky.
For example, if your system libssl is something ancient like 0.9.8 but
you compiled httpd with 1.0.2, you'll have a bad time (unless you do
some acrobatics to compile/install the module by hand) trying to speak
modern crypto algorithms.
* The setup activities for the test framework also imply root access.
It's definitely possible to install CPAN modules in a local directory,
but that again also requires acrobatics. Some folks don't have root or
just don't want to install system-wide stuff for just one project. Other
testing frameworks use the same runtime to test as the code does to
execute (JUnit as an example).
* It also feels weird that the test project is separate and that I can't
run `make test'. This is a spinal reflex for sysadmins after compiling
software. Not really an 'intimidation' thing... just... weird.

These are reasons why I love the idea of using Docker for building and
testing. In a cleanroom pseudo-installation, you have complete control
of the environment and can manipulate it/throw it away. The immutability
also ensures you build and test from a known state. It also helps that
with a few changes to a Dockerfile I can switch from building and
testing on Debian to Ubuntu in minutes.

>
> --Jacob

-- 
Daniel Ruggeri


Re: Autobuild Progress (was Re: Automated tests)

Posted by Jacob Champion <ch...@gmail.com>.
On 01/30/2017 05:39 PM, Daniel Ruggeri wrote:
> I'm tremendously inspired by this work. What are your thoughts on the
> idea of having a series of docker container builds that compile and run
> the test suite on various distributions? I'll volunteer to give this a
> whack since it's something that's been in the back of my mind for a long
> while...

I think that would be awesome. The cheaper we can make new test 
distributions, the easier we can test all sorts of different 
configurations (which, given how many knobs and buttons we expose, is 
important).

I don't know how much of Infra's current Puppet/Buildbot framework is 
Docker-friendly, but if there's currently no cheap virtualization 
solution there for build slaves, then anything we added would 
potentially be useful for other ASF projects as well. Definitely 
something to start a conversation over.

(Side thought: since raw speed is typically one of the top priorities 
for a CI test platform, we'd have to carefully consider which items we 
tested by spinning up containers and which we ran directly on a physical 
machine. Though I don't know how fast Docker has gotten with all of the 
fancy virtualization improvements.)

> I think with the work you've done and plan to do, a step like above to
> increase our ability to test against many distributions all at once (and
> cheaply) and also making the test framework more approachable, we could
> seriously increase our confidence when pulling the trigger on a release
> or accepting a backport.

+1. It'll take some doing (mostly focused on the coverage of the test 
suite itself), but we can get there.

> I'm also a big fan of backports requiring tests, but am honestly
> intimidated by the testing framework...

What would make it less intimidating for you? (I agree with you, but I'm 
hoping to get your take without biasing it with my already-strong 
opinions. :D)

--Jacob

Re: Autobuild Progress (was Re: Automated tests)

Posted by Daniel Ruggeri <DR...@primary.net>.
I'm tremendously inspired by this work. What are your thoughts on the
idea of having a series of docker container builds that compile and run
the test suite on various distributions? I'll volunteer to give this a
whack since it's something that's been in the back of my mind for a long
while...

I think with the work you've done and plan to do, a step like above to
increase our ability to test against many distributions all at once (and
cheaply) and also making the test framework more approachable, we could
seriously increase our confidence when pulling the trigger on a release
or accepting a backport.

P.S.

I'm also a big fan of backports requiring tests, but am honestly
intimidated by the testing framework...

-- 
Daniel Ruggeri

On 1/30/2017 2:02 PM, Jacob Champion wrote:
> On 01/02/2017 07:53 AM, Daniel Shahaf wrote:
>> Setting this up isn't a lot more complicated than filing an INFRA ticket
>> with a build script, a list of build dependencies, and a list of
>> branches to build, and deciding how build failures would be notified.
>
> To follow up on this, we now have an operational (if not yet fully
> functional) buildbot for trunk:
>
>     https://ci.apache.org/builders/httpd-trunk
>
> There's a lot of work yet to do, but for now we have an Ubuntu machine
> that can be manually triggered to run an incremental build on trunk.
>
> Here's my list of TODOs:
> - run per-commit incremental builds
> - run nightly clean builds
> - run a nightly test suite
> - set up 2.4.x in addition to trunk
> - set up Windows in addition to Ubuntu
>
> == Details ==
>
> The bot is building against Ubuntu-packaged dependencies, which
> requires a new apr-config option for buildconf (run `./buildconf
> --help` on the latest trunk for info). This leaves out a few modules
> that need some bleeding-edge dependencies:
>
> - mod_brotli (needs the unreleased libbrotli)
> - mod_crypto (needs APR 1.6)
> - mod[_proxy]_http2 (needs libnghttp2)
> - mod_lua (needs our configure script to recognize Lua 5.3)
>
> So to run a full test suite, eventually we'll need to build those
> dependencies too. I figure this is a good start for now.
>
> The following modules aren't built because of platform-specific stuff:
>
> - mod_socache_dc (distcache)
> - mod_journald, mod_systemd
> - mod_privileges (sys/priv.h)
> - mpm_os2
> - mpm_winnt
>
> If you'd like to poke around, our buildbot configuration file is in
> the infra repository and is, I believe, open to all our committers:
>
>
> https://svn.apache.org/repos/infra/infrastructure/buildbot/aegis/buildmaster/master1/projects/httpd.conf
>
>
> --Jacob


Autobuild Progress (was Re: Automated tests)

Posted by Jacob Champion <ch...@gmail.com>.
On 01/02/2017 07:53 AM, Daniel Shahaf wrote:
> Setting this up isn't a lot more complicated than filing an INFRA ticket
> with a build script, a list of build dependencies, and a list of
> branches to build, and deciding how build failures would be notified.

To follow up on this, we now have an operational (if not yet fully 
functional) buildbot for trunk:

     https://ci.apache.org/builders/httpd-trunk

There's a lot of work yet to do, but for now we have an Ubuntu machine 
that can be manually triggered to run an incremental build on trunk.

Here's my list of TODOs:
- run per-commit incremental builds
- run nightly clean builds
- run a nightly test suite
- set up 2.4.x in addition to trunk
- set up Windows in addition to Ubuntu

== Details ==

The bot is building against Ubuntu-packaged dependencies, which requires 
a new apr-config option for buildconf (run `./buildconf --help` on the 
latest trunk for info). This leaves out a few modules that need some 
bleeding-edge dependencies:

- mod_brotli (needs the unreleased libbrotli)
- mod_crypto (needs APR 1.6)
- mod[_proxy]_http2 (needs libnghttp2)
- mod_lua (needs our configure script to recognize Lua 5.3)

So to run a full test suite, eventually we'll need to build those 
dependencies too. I figure this is a good start for now.

The following modules aren't built because of platform-specific stuff:

- mod_socache_dc (distcache)
- mod_journald, mod_systemd
- mod_privileges (sys/priv.h)
- mpm_os2
- mpm_winnt

If you'd like to poke around, our buildbot configuration file is in the 
infra repository and is, I believe, open to all our committers:

 
https://svn.apache.org/repos/infra/infrastructure/buildbot/aegis/buildmaster/master1/projects/httpd.conf

--Jacob

Re: Automated tests

Posted by Daniel Shahaf <d....@daniel.shahaf.name>.
Luca Toscano wrote on Mon, Jan 02, 2017 at 15:51:43 +0100:
> I don't have wide experience building httpd on systems other than
> Debian/Ubuntu, so any help/suggestion/pointer would help a lot (for
> example, building on Windows).

I wouldn't worry about that just yet.  Start by having only an Ubuntu
bot; that'd already be a step forward.  Let someone who builds on
Windows be the liaison with infra about a Windows buildslave.

Setting this up isn't a lot more complicated than filing an INFRA ticket
with a build script, a list of build dependencies, and a list of
branches to build, and deciding how build failures would be notified.

Cheers,

Daniel

Re: Automated tests

Posted by Luca Toscano <to...@gmail.com>.
Hi Stefan,

2016-12-30 23:55 GMT+01:00 Stefan Fritsch <sf...@sfritsch.de>:
>
>
> Another thing that is missing: A buildbot that builds current trunk (and
> possibly 2.x branches) and runs the test suite and alerts the dev list of
> regressions. I guess this "just" needs a volunteer to set it up and document
> it and the ASF would provide the infrastructure.
>

I agree 100% with Jacob, but this particular bit is something that I can
try to do. Not sure how feasible it would be to run the test suite, but
we'd definitely need something that simply builds httpd after each commit
on the major branches (2.2.x, 2.4.x, trunk).

I don't have wide experience building httpd on systems other than
Debian/Ubuntu, so any help/suggestion/pointer would help a lot (for
example, building on Windows).

Thanks!

Luca

Re: Automated tests

Posted by Jacob Champion <ch...@gmail.com>.
On 12/30/2016 02:55 PM, Stefan Fritsch wrote:
> Yes, httpd lacks unit tests. One problem is that many APIs depend on very
> complex structs like request_rec, conn_rec, server_conf, etc. In order to
> write unit tests for such APIs, one would need to write quite a bit of
> infrastructure to set these things up. I think it would be worth the effort,
> but it's not a small task. As there does not seem to be anybody with enough
> spare time to do it, one could possibly ask someone (CII?) for funding.
>
> A possible approach would be to compile the unit tests into the server and
> execute them on startup if a special define is given (like the various DUMP_*
> defines). Not sure how to get access to all the static helper functions for unit
> tests, though, unless one would somehow include the tests in the same .c file.

That's an interesting idea. To riff on that a little bit: I've seen some 
questions on #httpd recently about the shared-library build for 
libhttpd, which IIUC only exists on Windows at the moment. It seems like 
having a libhttpd would simplify building a unit test executable... can 
anyone point me to the history behind the removal of that feature?

> If the test suite were easier to run, maybe more people would submit
> tests. Is there a reason why the test suite is in a separate repository? Would
> it help if it were moved into the normal httpd repo? Would it make sense to
> include it in the release tarballs, possibly including the necessary non-standard
> perl modules? And include it in the makefiles in a way that a user can
> install a set of standard perl modules (from a distribution or CPAN) and then
> call "make test" to start it? What is in the test/ dir in the httpd repo right
> now seems mostly useless and could probably be removed.

My personal end goals are
- to be able to perform the standard `make && make check` invocation 
without installation (this was discussed with a user in another dev@ thread)
- to have a bugfix/feature *and* its related tests in the same commit or 
backported patchset

So, to that end, I'd like to see the test suite eventually move into the 
httpd repo. I think I can start on my first goal without that, though 
(and I plan to start looking at that soon). That will hopefully give us 
time to discuss any possible fallout of merging the two codebases, while 
giving us some of the benefits in the meantime.

> Another idea to make writing tests more attractive could be to somehow include
> it in the backporting policy. For example, if there is a test for a new
> feature (positive and error handling) or a bug fix, we could require only two
> +1s for a backport.

I like this idea too.

> Another thing that is missing: A buildbot that builds current trunk (and
> possibly 2.x branches) and runs the test suite and alerts the dev list of
> regressions. I guess this "just" needs a volunteer to set it up and document
> it and the ASF would provide the infrastructure.

+1. This is a prerequisite to having a nice release cadence, IMHO.

--Jacob

Re: Automated tests

Posted by Graham Leggett <mi...@sharp.fm>.
On 31 Dec 2016, at 4:58 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:

> Thinking two things would help.
> 
> Splitting our functional utilities into a libaputil would make it much easier to write the tests that exercise these elements of our code.

Definite +1.

I want to see a C-based test suite, the same as apr and apr-util.

> And what I found easiest is a dedicated module to provide diagnostics or tests. When not loaded, they are skipped.

+1.

Regards,
Graham


Re: Automated tests

Posted by William A Rowe Jr <wr...@rowe-clan.net>.
On Dec 30, 2016 14:55, "Stefan Fritsch" <sf...@sfritsch.de> wrote:

Hi,

it's quite rare that I have a bit of time for httpd nowadays. But I want to
comment on a mail that Jacob Champion wrote on -security that contains some
valid points about the shortcomings of our test framework. I am posting this
to -dev with his permission.

On Wednesday, 21 December 2016 08:55:30 CET Jacob Champion wrote:
> - Our APIs are really complex, and we don't really have unit tests for
> them. Nor are the internal APIs documented as well as the external APIs
> are. We had a few false starts for security fixes this release that were
> later shown to break something else, and I think that's related.

Yes, httpd lacks unit tests. One problem is that many APIs depend on very
complex structs like request_rec, conn_rec, server_conf, etc. In order to
write unit tests for such APIs, one would need to write quite a bit of
infrastructure to set these things up. I think it would be worth the effort,
but it's not a small task. As there does not seem to be anybody with enough
spare time to do it, one could possibly ask someone (CII?) for funding.

A possible approach would be to compile the unit tests into the server and
execute them on startup if a special define is given (like the various DUMP_*
defines). Not sure how to get access to all the static helper functions for
unit tests, though, unless one would somehow include the tests in the same
.c file.


Thinking two things would help.

Splitting our functional utilities into a libaputil would make it much
easier to write the tests that exercise these elements of our code.

And what I found easiest is a dedicated module to provide diagnostics or
tests. When not loaded, they are skipped.