Posted to commits@apr.apache.org by bj...@apache.org on 2001/03/05 15:52:09 UTC

cvs commit: apr/network_io/os2 sendrecv.c

bjh         01/03/05 06:52:07

  Modified:    network_io/os2 sendrecv.c
  Log:
  OS/2: Limit data passed to writev() to 64k as that's all it can handle.
  
  Revision  Changes    Path
  1.19      +9 -3      apr/network_io/os2/sendrecv.c
  
  Index: sendrecv.c
  ===================================================================
  RCS file: /home/cvs/apr/network_io/os2/sendrecv.c,v
  retrieving revision 1.18
  retrieving revision 1.19
  diff -u -r1.18 -r1.19
  --- sendrecv.c	2001/02/16 04:16:01	1.18
  +++ sendrecv.c	2001/03/05 14:51:59	1.19
  @@ -142,10 +142,16 @@
       apr_status_t rv;
       struct iovec *tmpvec;
       int fds, err = 0;
  +    int nv_tosend, total = 0;
   
  -    tmpvec = alloca(sizeof(struct iovec) * nvec);
  -    memcpy(tmpvec, vec, sizeof(struct iovec) * nvec);
  +    /* Make sure writev() only gets fed 64k at a time */
  +    for ( nv_tosend = 0; total + vec[nv_tosend].iov_len < 65536; nv_tosend++ ) {
  +        total += vec[nv_tosend].iov_len;
  +    }
   
  +    tmpvec = alloca(sizeof(struct iovec) * nv_tosend);
  +    memcpy(tmpvec, vec, sizeof(struct iovec) * nv_tosend);
  +
       do {
           if (!sock->nonblock || err == SOCEWOULDBLOCK) {
               fds = sock->socketdes;
  @@ -165,7 +171,7 @@
               }
           }
   
  -        rv = writev(sock->socketdes, tmpvec, nvec);
  +        rv = writev(sock->socketdes, tmpvec, nv_tosend);
           err = rv < 0 ? sock_errno() : 0;
       } while (err == SOCEINTR || err == SOCEWOULDBLOCK);
   
  
  
  
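As a standalone illustration of the technique in this patch (not the committed
APR code; the function name is invented), the clamping loop can be written with
an explicit bound on the vector count, a safeguard the committed loop omits:

    #include <stddef.h>
    #include <sys/uio.h>

    #define WRITEV_MAX_BYTES 65536   /* OS/2's per-call writev() limit */

    /* Count how many leading iovecs fit in one writev() call without
     * reaching WRITEV_MAX_BYTES.  The "n < nvec" test is an added
     * safeguard; the committed loop above has no such bound. */
    static int iovec_limit(const struct iovec *vec, int nvec)
    {
        size_t total = 0;
        int n;

        for (n = 0; n < nvec && total + vec[n].iov_len < WRITEV_MAX_BYTES; n++) {
            total += vec[n].iov_len;
        }
        return n;
    }

A caller would hand writev() only the first iovec_limit(vec, nvec) entries and
loop until the whole array has been sent.  Note that, like the patch, this
yields zero iovecs when vec[0].iov_len alone is >= 64k; such an iovec would
have to be split.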

Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Cliff Woolley <cl...@yahoo.com>.
On Tue, 6 Mar 2001, Greg Ames wrote:

> Paul Reder posted a patch to mod_include to take care of a problem that
> sounds like this.  Ryan committed it not too long ago.  Do you see this
> behavior with the latest and greatest mod_include?

As I understand it, the problem has just shifted from mod_include down to
the content-length filter, which is buffering the entire data stream (even
though the data is now being passed down from mod_include in ~8K chunks).
So the problem is that we shouldn't even be using the content-length
filter in this case... we should be using HTTP/1.1 chunked transfer
encoding.

Right?

--Cliff

--------------------------------------------------------------
   Cliff Woolley
   cliffwoolley@yahoo.com
   Charlottesville, VA



Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by "William A. Rowe, Jr." <ad...@rowe-clan.net>.
From: "Brian Havard" <br...@kheldar.apana.org.au>
Sent: Wednesday, March 07, 2001 10:39 AM


> Although I'm no protocol expert, my reading of rfc2068 leads me to believe
> that such a request header would be used to indicate that the request body
> is chunked, not that the client accepts or wants a chunked response.

Follow RFC2616 (and RFC2817) as the current http (and http/tls) specification.

[The latter interests me the most ... mass vhosted IP-less SSL?  Cool :-]



Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by rb...@covalent.net.
> >Mod_include is working correctly, as is the content-length filter, in all
> >of my tests.  The only time I don't get chunked responses from the
> >content-length filter is if I don't put "Transfer-Encoding: chunked" in
> >the request headers.
>
> That sounds wrong to me. The request shouldn't have to specify
> "Transfer-Encoding: chunked" on a 1.1 request. The fact that the HTTP
> version is 1.1 implies that the client can handle chunking: "All HTTP/1.1
> applications MUST be able to receive and decode the "chunked" transfer
> coding"
>
> Although I'm no protocol expert, my reading of rfc2068 leads me to believe
> that such a request header would be used to indicate that the request body
> is chunked, not that the client accepts or wants a chunked response.
>
> The test client I'm using does not send a Transfer-Encoding header and
> Apache 1.3 quite happily sends it a chunked body.

You are most likely correct.  I'll review the logic in
ap_content_length_filter again.  This is almost certainly the bug.  I'll
commit a fix later today if it is obvious.

Ryan

_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Brian Havard <br...@kheldar.apana.org.au>.
On Wed, 7 Mar 2001 06:53:02 -0800 (PST), rbb@covalent.net wrote:

>On Wed, 7 Mar 2001, Paul J. Reder wrote:
>
>> Cliff Woolley wrote:
>> > I'm not positive, but I *might* have seen this happening as well on my
>> > Linux box.  I was testing with some 128kb server-parsed files to try to
>> > reproduce the original problem, and I couldn't for the life of me get it
>> > to use chunked TE.  It always came back with a content-length.  To even
>> > try it with chunking on, I had to hack the server to ignore the
>> > content-length even if it had one.
>>
>> Currently mod_include deliberately unsets the content-length as soon as it has
>> identified an SSI tag to process. Is a filter supposed to set a chunked flag
>> to encourage possible chunked processing?
>>
>> In any event, the content-length should be unset if an SSI tag is found so
>> no server hacking should be required for SSI responses.
>
>Mod_include is working correctly, as is the content-length filter, in all
>of my tests.  The only time I don't get chunked responses from the
>content-length filter is if I don't put "Transfer-Encoding: chunked" in
>the request headers.

That sounds wrong to me. The request shouldn't have to specify
"Transfer-Encoding: chunked" on a 1.1 request. The fact that the HTTP
version is 1.1 implies that the client can handle chunking: "All HTTP/1.1
applications MUST be able to receive and decode the "chunked" transfer
coding"

Although I'm no protocol expert, my reading of rfc2068 leads me to believe
that such a request header would be used to indicate that the request body
is chunked, not that the client accepts or wants a chunked response.

The test client I'm using does not send a Transfer-Encoding header and
Apache 1.3 quite happily sends it a chunked body.

-- 
 ______________________________________________________________________________
 |  Brian Havard                 |  "He is not the messiah!                   |
 |  brianh@kheldar.apana.org.au  |  He's a very naughty boy!" - Life of Brian |
 ------------------------------------------------------------------------------
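
Brian's point can be seen on the wire.  A bare HTTP/1.1 request with no
Transfer-Encoding or TE header may still legally draw a chunked response.  In
this hypothetical exchange, each chunk is a hexadecimal size line followed by
that many bytes of data, with CRLF line endings throughout:

    GET /test.shtml HTTP/1.1
    Host: example.com

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Content-Type: text/html

    c
    <html><body>
    13
    Hello</body></html>
    0

The final "0" chunk, followed by an empty line, terminates the body.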


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by rb...@covalent.net.
Okay guys, this is definitely the bug we are seeing.  I am testing a patch
now, and it will be committed in a few hours.  Don't shoot the messenger
on this one.  I just found the bug; I don't know how long it has been
there.

Ryan

On 7 Mar 2001, Jeff Trawick wrote:

> Cliff Woolley <cl...@yahoo.com> writes:
>
> > On Wed, 7 Mar 2001 rbb@covalent.net wrote:
> >
> > > Mod_include is working correctly, as is the content-length filter, in all
> > > of my tests.  The only time I don't get chunked responses from the
> > > content-length filter is if I don't put "Transfer-Encoding: chunked" in
> > > the request headers.
> >
> > Hrm... I was under the impression that "Transfer-Encoding: chunked" means
> > that the *request* is chunked.
>
> yep (I assume you mean when that header field is provided on the request)
>
> >                                  "TE: chunked" means that the *response*
> > may be chunked.  Does the server not send a chunked response unless the
> > client sent a chunked request?  Surely not, since many requests don't have
> > bodies to chunk in the first place.
>
> The server can send a chunked response for any 1.1 request.  Using
> non-chunked (with content length) is preferable because it gives the
> client information for displaying status.
>
> --
> Jeff Trawick | trawickj@bellsouth.net | PGP public key at web site:
>        http://www.geocities.com/SiliconValley/Park/9289/
>              Born in Roswell... married an alien...
>
>


_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Jeff Trawick <tr...@bellsouth.net>.
Cliff Woolley <cl...@yahoo.com> writes:

> On Wed, 7 Mar 2001 rbb@covalent.net wrote:
> 
> > Mod_include is working correctly, as is the content-length filter, in all
> > of my tests.  The only time I don't get chunked responses from the
> > content-length filter is if I don't put "Transfer-Encoding: chunked" in
> > the request headers.
> 
> Hrm... I was under the impression that "Transfer-Encoding: chunked" means
> that the *request* is chunked.  

yep (I assume you mean when that header field is provided on the request)

>                                  "TE: chunked" means that the *response*
> may be chunked.  Does the server not send a chunked response unless the
> client sent a chunked request?  Surely not, since many requests don't have
> bodies to chunk in the first place.

The server can send a chunked response for any 1.1 request.  Using
non-chunked (with content length) is preferable because it gives the
client information for displaying status.

-- 
Jeff Trawick | trawickj@bellsouth.net | PGP public key at web site:
       http://www.geocities.com/SiliconValley/Park/9289/
             Born in Roswell... married an alien...
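
The chunked format Jeff refers to is cheap to produce.  A minimal sketch of a
chunk writer (the function name is made up and error handling is omitted):

    #include <stdio.h>

    /* Emit one chunk of a chunked-encoded body: a hexadecimal size
     * line, the data, and a trailing CRLF.  Call once more with
     * len == 0 to emit the terminating zero-length chunk. */
    static void write_chunk(FILE *out, const char *data, size_t len)
    {
        fprintf(out, "%zx\r\n", len);
        if (len > 0) {
            fwrite(data, 1, len, out);
        }
        fputs("\r\n", out);
    }

For example, write_chunk(out, "Hello", 5) followed by write_chunk(out, NULL, 0)
produces "5\r\nHello\r\n0\r\n\r\n".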

Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Cliff Woolley <cl...@yahoo.com>.
On Wed, 7 Mar 2001 rbb@covalent.net wrote:

> Mod_include is working correctly, as is the content-length filter, in all
> of my tests.  The only time I don't get chunked responses from the
> content-length filter is if I don't put "Transfer-Encoding: chunked" in
> the request headers.

Hrm... I was under the impression that "Transfer-Encoding: chunked" means
that the *request* is chunked.  "TE: chunked" means that the *response*
may be chunked.  Does the server not send a chunked response unless the
client sent a chunked request?  Surely not, since many requests don't have
bodies to chunk in the first place.

--Cliff


--------------------------------------------------------------
   Cliff Woolley
   cliffwoolley@yahoo.com
   Charlottesville, VA
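
For reference, the TE header Cliff mentions is defined in RFC 2616 (section
14.39): it is a request header naming the extension transfer-codings the
client will accept in the response (chunked is always acceptable to HTTP/1.1
clients and need not be listed).  A hypothetical example:

    GET /index.html HTTP/1.1
    Host: example.com
    TE: trailers, deflate;q=0.5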



Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by rb...@covalent.net.
On Wed, 7 Mar 2001, Paul J. Reder wrote:

> Cliff Woolley wrote:
> > I'm not positive, but I *might* have seen this happening as well on my
> > Linux box.  I was testing with some 128kb server-parsed files to try to
> > reproduce the original problem, and I couldn't for the life of me get it
> > to use chunked TE.  It always came back with a content-length.  To even
> > try it with chunking on, I had to hack the server to ignore the
> > content-length even if it had one.
>
> Currently mod_include deliberately unsets the content-length as soon as it has
> identified an SSI tag to process. Is a filter supposed to set a chunked flag
> to encourage possible chunked processing?
>
> In any event, the content-length should be unset if an SSI tag is found so
> no server hacking should be required for SSI responses.

Mod_include is working correctly, as is the content-length filter, in all
of my tests.  The only time I don't get chunked responses from the
content-length filter is if I don't put "Transfer-Encoding: chunked" in
the request headers.

mod_include should unset the content length, and the core sets up all the
flags for chunking, so mod_include can safely ignore it.

Ryan

_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by "Paul J. Reder" <re...@raleigh.ibm.com>.
Cliff Woolley wrote:
> I'm not positive, but I *might* have seen this happening as well on my
> Linux box.  I was testing with some 128kb server-parsed files to try to
> reproduce the original problem, and I couldn't for the life of me get it
> to use chunked TE.  It always came back with a content-length.  To even
> try it with chunking on, I had to hack the server to ignore the
> content-length even if it had one.

Currently mod_include deliberately unsets the content-length as soon as it has
identified an SSI tag to process. Is a filter supposed to set a chunked flag
to encourage possible chunked processing?

In any event, the content-length should be unset if an SSI tag is found so
no server hacking should be required for SSI responses.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein
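
In Apache 2.0 filter terms, the unset Paul describes is a single table
operation.  A sketch of the idea (not the actual mod_include source; the
function name is invented):

    #include "httpd.h"
    #include "apr_tables.h"
    #include "util_filter.h"

    /* Once an SSI tag is found, the original file's length no longer
     * applies: drop Content-Length and let the core arrange chunked
     * encoding for HTTP/1.1 clients. */
    static void ssi_drop_content_length(ap_filter_t *f)
    {
        apr_table_unset(f->r->headers_out, "Content-Length");
    }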

Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Cliff Woolley <cl...@yahoo.com>.
On Tue, 6 Mar 2001 rbb@covalent.net wrote:

> The problem is with the content-length filter.  Brian said yesterday that
> it was actually buffering everything.

I'm not positive, but I *might* have seen this happening as well on my
Linux box.  I was testing with some 128kb server-parsed files to try to
reproduce the original problem, and I couldn't for the life of me get it
to use chunked TE.  It always came back with a content-length.  To even
try it with chunking on, I had to hack the server to ignore the
content-length even if it had one.

--Cliff


--------------------------------------------------------------
   Cliff Woolley
   cliffwoolley@yahoo.com
   Charlottesville, VA



Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by rb...@covalent.net.
On Tue, 6 Mar 2001, Greg Ames wrote:

> Brian Havard wrote:
> >
> > On Mon, 5 Mar 2001 13:42:00 -0500 (EST), Cliff Woolley wrote:
> > >
> > >AHA!  No wonder I couldn't reproduce the problem on Linux.  =-)  I take it
> > >that the >64k bogosity is now completely fixed?
> >
> > Well, yes & no. Yes, in that large writev()s will no longer die on OS/2.
> > No, in that it should never have had to handle them; there are other bugs
> > to be found. Requesting a 20MB shtml file chews 20MB of server memory,
> > serious badness. Am I the only one seeing this, or is it easily
> > reproducible?
> >
>
> Paul Reder posted a patch to mod_include to take care of a problem that
> sounds like this.  Ryan committed it not too long ago.  Do you see this
> behavior with the latest and greatest mod_include?

The problem is with the content-length filter.  Brian said yesterday that
it was actually buffering everything.

Ryan
_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Greg Ames <gr...@remulak.net>.
Brian Havard wrote:
> 
> On Mon, 5 Mar 2001 13:42:00 -0500 (EST), Cliff Woolley wrote:
> >
> >AHA!  No wonder I couldn't reproduce the problem on Linux.  =-)  I take it
> >that the >64k bogosity is now completely fixed?
> 
> Well, yes & no. Yes, in that large writev()s will no longer die on OS/2.
> No, in that it should never have had to handle them; there are other bugs
> to be found. Requesting a 20MB shtml file chews 20MB of server memory,
> serious badness. Am I the only one seeing this, or is it easily
> reproducible?
> 

Paul Reder posted a patch to mod_include to take care of a problem that
sounds like this.  Ryan committed it not too long ago.  Do you see this
behavior with the latest and greatest mod_include?

Greg

Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by rb...@covalent.net.
On Wed, 7 Mar 2001, Brian Havard wrote:

> On Mon, 5 Mar 2001 13:42:00 -0500 (EST), Cliff Woolley wrote:
>
> >On 5 Mar 2001 bjh@apache.org wrote:
> >
> >> bjh         01/03/05 06:52:07
> >>
> >>   Modified:    network_io/os2 sendrecv.c
> >>   Log:
> >>   OS/2: Limit data passed to writev() to 64k as that's all it can handle.
> >>
> >>   Revision  Changes    Path
> >>   1.19      +9 -3      apr/network_io/os2/sendrecv.c
> >>
> >
> >AHA!  No wonder I couldn't reproduce the problem on Linux.  =-)  I take it
> >that the >64k bogosity is now completely fixed?
>
> Well, yes & no. Yes, in that large writev()s will no longer die on OS/2.
> No, in that it should never have had to handle them; there are other bugs
> to be found. Requesting a 20MB shtml file chews 20MB of server memory,
> serious badness. Am I the only one seeing this, or is it easily
> reproducible?

I just tried with a 100K file (like your 50K, only bigger) to get the
content-length filter to buffer it, and couldn't.  That makes it look
like this, too, is an OS/2 issue.

Ryan

_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Brian Havard <br...@kheldar.apana.org.au>.
On Mon, 5 Mar 2001 13:42:00 -0500 (EST), Cliff Woolley wrote:

>On 5 Mar 2001 bjh@apache.org wrote:
>
>> bjh         01/03/05 06:52:07
>>
>>   Modified:    network_io/os2 sendrecv.c
>>   Log:
>>   OS/2: Limit data passed to writev() to 64k as that's all it can handle.
>>
>>   Revision  Changes    Path
>>   1.19      +9 -3      apr/network_io/os2/sendrecv.c
>>
>
>AHA!  No wonder I couldn't reproduce the problem on Linux.  =-)  I take it
>that the >64k bogosity is now completely fixed?

Well, yes & no. Yes, in that large writev()s will no longer die on OS/2.
No, in that it should never have had to handle them; there are other bugs
to be found. Requesting a 20MB shtml file chews 20MB of server memory,
serious badness. Am I the only one seeing this, or is it easily
reproducible?

-- 
 ______________________________________________________________________________
 |  Brian Havard                 |  "He is not the messiah!                   |
 |  brianh@kheldar.apana.org.au  |  He's a very naughty boy!" - Life of Brian |
 ------------------------------------------------------------------------------


Re: cvs commit: apr/network_io/os2 sendrecv.c

Posted by Cliff Woolley <cl...@yahoo.com>.
On 5 Mar 2001 bjh@apache.org wrote:

> bjh         01/03/05 06:52:07
>
>   Modified:    network_io/os2 sendrecv.c
>   Log:
>   OS/2: Limit data passed to writev() to 64k as that's all it can handle.
>
>   Revision  Changes    Path
>   1.19      +9 -3      apr/network_io/os2/sendrecv.c
>

AHA!  No wonder I couldn't reproduce the problem on Linux.  =-)  I take it
that the >64k bogosity is now completely fixed?

--Cliff


--------------------------------------------------------------
   Cliff Woolley
   cliffwoolley@yahoo.com
   Charlottesville, VA


