Posted to dev@apr.apache.org by Scott Lamb <sl...@slamb.org> on 2004/02/23 20:33:31 UTC
Compile-time vs. run-time checks
I'm putting together a patch to use SO_(RCV|SND)TIMEO for
apr_socket_timeout where available; I expect I'll find it has better
performance on some platforms, as it would no longer require using
non-blocking IO and preceding every read() and write() with a select().
(I intend to try benchmarking Apache on Darwin, where the system call
overhead seems to be quite high.)
On some older versions of platforms (Linux 2.2), these #defines exist
but do not work - it's not possible to set them. Can I assume that if
APR is built with a kernel in which it does work (Linux 2.4), it will
be run with one as well? Or should I include a runtime check for this
option?
Thanks,
Scott Lamb
Re: Compile-time vs. run-time checks
Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
At 01:33 PM 2/23/2004, Scott Lamb wrote:
>On some older versions of platforms (Linux 2.2), these #defines exist but do not work - it's not possible to set them. Can I assume that if APR is built with a kernel in which it does work (Linux 2.4), it will be run with one as well? Or should I include a runtime check for this option?
On win32 we have implemented such checks as run-time tests. At this time,
afaict the Unix side relies on compile-time tests.
However, some of the moving targets (sendfile unavailable, buggy, or both
available and stable) would be much better picked up at run time, since
patch updates to the kernel, or attempts to move the binaries from one box
to another, have become more and more of a hassle.
Let's just say that nobody has picked up this ball and run with it yet. If
you are so inclined, by all means lean on run-time tests. Note that testing
once per app invocation would be preferable to testing on each function
invocation.
Bill
apr_socket_timeout speed (was Re: Compile-time vs. run-time checks)
Posted by Scott Lamb <sl...@slamb.org>.
On Feb 23, 2004, at 11:11 PM, Cliff Woolley wrote:
> On Mon, 23 Feb 2004, Scott Lamb wrote:
>
>> significant difference between them. In transferring either big or
>> small files with httpd-2.0 HEAD and ab over loopback on Darwin
>> (keepalive on). Which I'd think would be the ideal situation for
>> seeing an improvement...
>
> Neither ab nor loopback make for a particularly good test of this sort
> of thing. I suggest you use flood instead of ab and use two machines
> instead of the loopback adapter.
I'll play with it a while. Flood was giving me trouble (couldn't find
docs on what the numbers it spat out meant, and the analysis awk script
got divide-by-zero errors), so I tried siege for a bit. Had
disappointing results, then realized I wasn't anywhere close to
saturating the server's CPU or the network. The roughly equal-speed
Linux 2.6 client machine is groaning...and it's spending 60% of its time
in softirq, according to top. I recently replaced the network card with
some cheap thing; maybe the drivers are just that awful. If so, I'll
need to replace it before getting decent benchmarks; it might be a
while.
>
> --Cliff
>
Thanks for the ideas.
Scott
Re: Compile-time vs. run-time checks
Posted by Cliff Woolley <jw...@virginia.edu>.
On Mon, 23 Feb 2004, Scott Lamb wrote:
> significant difference between them. In transferring either big or
> small files with httpd-2.0 HEAD and ab over loopback on Darwin
> (keepalive on). Which I'd think would be the ideal situation for seeing
> an improvement...
Neither ab nor loopback make for a particularly good test of this sort of
thing. I suggest you use flood instead of ab and use two machines instead
of the loopback adapter.
--Cliff
Re: Compile-time vs. run-time checks
Posted by Scott Lamb <sl...@slamb.org>.
On Feb 23, 2004, at 1:43 PM, Greg Stein wrote:
> On Mon, Feb 23, 2004 at 01:33:31PM -0600, Scott Lamb wrote:
>> I'm putting together a patch to use SO_(RCV|SND)TIMEO for
>> apr_socket_timeout where available; I expect I'll find it has better
>> performance on some platforms, as it would no longer require using
>> non-blocking IO and preceding every read() and write() with a
>> select().
>> (I intend to try benchmarking Apache on Darwin, where the system call
>> overhead seems to be quite high.)
It seems I was way off...I've got my somewhat tested but unpolished
patch attached, but unless someone else runs benchmarks and sees a
speedup, I see no real reason to apply it. I couldn't see a statistically
significant difference between them when transferring either big or
small files with httpd-2.0 HEAD and ab over loopback on Darwin
(keepalive on), which I'd think would be the ideal situation for seeing
an improvement...
I'm surprised. System calls seem to be an order of magnitude slower on
Darwin than they were on Linux for roughly comparable hardware, so I'd
expected to see the extra overhead being significant in some way. But I
guess it was just such a small piece of the whole that it didn't matter
anyway. Or something.
>>
>> On some older versions of platforms (Linux 2.2), these #defines exist
>> but do not work - it's not possible to set them. Can I assume that if
>> APR is built with a kernel in which it does work (Linux 2.4), it will
>> be run with one as well? Or should I include a runtime check for this
>> option?
>
> Icky. I don't think it is really possible to make that assumption.
> Thankfully, I also believe this is reasonably solved with a global
> variable (i.e. race conditions around coming up with the same flag
> don't apply :-), and the value certainly won't change over the
> process' lifetime).
>
> I would recommend a dynamic solution for now. We may be able to make
> that compile-time for certain platforms, where we know "all" versions
> handle the flag properly [when present].
Thanks for the suggestion. Maybe I'll apply it to the next patch. :/
>
> Cheers,
> -g
Scott
Re: Compile-time vs. run-time checks
Posted by Greg Stein <gs...@lyra.org>.
On Mon, Feb 23, 2004 at 01:33:31PM -0600, Scott Lamb wrote:
> I'm putting together a patch to use SO_(RCV|SND)TIMEO for
> apr_socket_timeout where available; I expect I'll find it has better
> performance on some platforms, as it would no longer require using
> non-blocking IO and preceding every read() and write() with a select().
> (I intend to try benchmarking Apache on Darwin, where the system call
> overhead seems to be quite high.)
>
> On some older versions of platforms (Linux 2.2), these #defines exist
> but do not work - it's not possible to set them. Can I assume that if
> APR is built with a kernel in which it does work (Linux 2.4), it will
> be run with one as well? Or should I include a runtime check for this
> option?
Icky. I don't think it is really possible to make that assumption.
Thankfully, I also believe this is reasonably solved with a global
variable (i.e. race conditions around coming up with the same flag don't
apply :-), and the value certainly won't change over the process'
lifetime).
I would recommend a dynamic solution for now. We may be able to make that
compile-time for certain platforms, where we know "all" versions handle
the flag properly [when present].
Cheers,
-g
--
Greg Stein, http://www.lyra.org/