Posted to dev@httpd.apache.org by "Roy T. Fielding" <fi...@kiwi.ics.uci.edu> on 1998/01/04 09:42:11 UTC

Re: argh... bugs in request processing...

>All I'm trying to do is have Apache die properly if a client tries to send
>too many headers.  Easy, I know what to do and how to do it.  Doing it,
>however, is another question.

You can't use die() at that part of the process, which is why nothing
in that neighborhood calls die().  You need a different function
that just spits out a canned message.
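
Something along these lines is what I mean.  Just a sketch, typed from
memory rather than against the tree, so treat the helper name and the
exact BUFF calls as guesses:

    /* Hypothetical helper, not an existing function: write a canned
     * status line straight into the connection's BUFF and push it out,
     * without going anywhere near die() or the response machinery. */
    static void reject_request(conn_rec *conn, const char *status_line)
    {
        bvputs(conn->client, "HTTP/1.0 ", status_line, "\r\n",
               "Connection: close\r\n\r\n", NULL);
        bflush(conn->client);   /* canned message out the door, nothing else */
    }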

>The second issue is that some errors (eg. URI too large) never get output
>to the client.  The reason for this is that the buffer isn't flushed
>before the connection is closed.  Oh, for some reason a URI too large
>doesn't call die() anyway but just sets r->status so it doesn't even try
>to send a response.  Why!?!  

Ditto.  It actually says that in the comments, or at least it did
when I added that code.  It is one of those things we put off till 2.0.
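
To spell out "just sets r->status": the code that reads the request line
records the failure, but nothing on that path ever turns the status into
bytes on the wire.  Roughly what the caller would have to do instead,
using the hypothetical reject_request() above and raw status numbers
purely for illustration:

    /* Hypothetical caller-side check, not the real read_request() code:
     * if reading the request line only left a failure status behind,
     * emit a canned response before the connection is torn down. */
    static void finish_failed_request(conn_rec *conn, request_rec *r)
    {
        if (r->status == 414) {          /* Request-URI Too Large case */
            reject_request(conn, "414 Request-URI Too Large");
        }
        else if (r->status != 200) {     /* any other error that was only recorded */
            reject_request(conn, "400 Bad Request");
        }
    }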

>Having send_error_response flush would be one way to fix the flush
>problem, but could have negative implications.  Oh, fsck it.  The problem
>is that a NULL return from read_request is used to indicate an error, yet
>if that is done then r isn't set so lingering_close is never called and we
>just set B_EOUT and toss the junk. 
>
>Hmm.  Are we leaking stuff here, since the pool isn't destroyed if
>read_request bails out?
>
>So we need to change read_request to cleanup after itself before returning
>NULL and flush connections.  Hmm... looks like this messes up the post
>read-request stuff too.

Yep, something like that.
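
For concreteness, the shape of that cleanup.  Again a sketch, not a
patch; lingering_close() lives over in http_main.c, so the real thing
would need to be split across files, and the names below are from
memory:

    /* Hypothetical bail-out path for read_request(): if it is going to
     * return NULL, it has to flush and close the connection itself and
     * destroy the per-request pool, because the caller never gets an r
     * to do any of that with. */
    static request_rec *bail_out_of_read_request(request_rec *r)
    {
        bflush(r->connection->client);   /* push any canned error onto the wire */
        lingering_close(r);              /* r still exists in here, so we can linger */
        destroy_pool(r->pool);           /* otherwise the per-request pool leaks */
        return NULL;                     /* callers keep treating NULL as failure */
    }

Each failure point inside read_request() would then do
"return bail_out_of_read_request(r);" instead of a bare "return NULL;".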

....Roy