Posted to dev@httpd.apache.org by Niklas Edmundsson <ni...@acc.umu.se> on 2008/02/21 22:09:42 UTC

Re: httpd 2.2.8 segfaults

On Wed, 20 Feb 2008, Niklas Edmundsson wrote:

> In any case, I should probably try to figure out how to reproduce this thing. 
> All coredumps I've looked at have been when serving DVD images, which of 
> course works flawlessly when I try it...

OK, I've been able to reproduce this, and it looks really bad because:

- I'm able to reproduce without having mod_cache loaded, i.e. vanilla
   httpd.
- It's as easy as continuing an aborted download, so it's a trivial
   DoS.

So, to reproduce I did:
1) Download 2222288895 bytes of the total 4444577792 bytes of a DVD
    image (debian-31r7-i386-binary-2.iso if you're curious).
2) Continue the download by doing wget -cS http://whatever/file.iso

This coredumps the server, immediately closing the connection to the 
client.

Backtrace of coredump is:
#0  0xffffe410 in __kernel_vsyscall ()
#1  0xb7cefca6 in kill () from /lib/tls/i686/cmov/libc.so.6
#2  0x08089a03 in sig_coredump (sig=11) at mpm_common.c:1235
#3  <signal handler called>
#4  0x00000000 in ?? ()
#5  0x08093010 in ap_byterange_filter (f=0x81606a0, bb=0x8161360)
     at byterange_filter.c:271
#6  0x0808aec5 in ap_pass_brigade (next=0x81606a0, bb=0x8161360)
     at util_filter.c:526
#7  0x08077576 in default_handler (r=0x815f968) at core.c:3740
#8  0x0807df8d in ap_run_handler (r=0x815f968) at config.c:157
#9  0x0807e6d7 in ap_invoke_handler (r=0x815f968) at config.c:372
#10 0x0808ea7c in ap_process_request (r=0x815f968) at http_request.c:258
#11 0x0808b543 in ap_process_http_connection (c=0x815bb08) at http_core.c:190
#12 0x08086df3 in ap_run_process_connection (c=0x815bb08) at connection.c:43
#13 0x08087274 in ap_process_connection (c=0x815bb08, csd=0x815b958)
     at connection.c:178
#14 0x08094b00 in process_socket (p=0x815b920, sock=0x815b958, my_child_num=0,
     my_thread_num=0, bucket_alloc=0x815d928) at worker.c:544
#15 0x080953c8 in worker_thread (thd=0x812d378, dummy=0x815b460)
     at worker.c:894
#16 0xb7e87eac in dummy_worker (opaque=0x812d378)
     at threadproc/unix/thread.c:142
#17 0xb7e1846b in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
#18 0xb7d9873e in clone () from /lib/tls/i686/cmov/libc.so.6

(gdb) dump_bucket ec
  bucket=¨0¸(0x08161364) length=135664344 data=0x080641b0
      contents=[**unknown**]          rc=n/a

(gdb) print *ec
$1 = {link = {next = 0x815db00, prev = 0x8169a50}, type = 0x815d928,
   length = 135664344, start = -5193905754803399840, data = 0x80641b0,
   free = 0x8161390, list = 0x1}

(gdb) print *ec->type
$2 = {name = 0x815b920 "¨À\v\b0ù\025\b\030Ñ\022\b¸9\026\b\030À\025\b",
   num_func = 135641240, is_metadata = APR_BUCKET_DATA, destroy = 0x816bd00,
   read = 0x58, setaside = 0x815d928, split = 0x815d910, copy = 0}

(gdb) dump_brigade bb
dump of brigade 0x8161360
    | type     (address)    | length | data addr 
---------------------------------------------------
  0 | FILE     (0x0815db00) | 16777216 | 0x0815daa8
  1 | FILE     (0x0815db58) | 16777216 | 0x0815daa8 
<snip>
265 | FILE     (0x081699f8) | 16777216 | 0x0815daa8 
266 | FILE     (0x0815d948) | 15392768 | 0x0815daa8 
267 | EOS      (0x08169a50) | 0      | 0x00000000 
end of brigade

So it looks to me that the bb brigade is intact, but the ec bucket has 
been smashed into bits and pieces...

This is on ubuntu710-i386, configured with:
./configure --prefix=/tmp/2.2.8.worker.debug --with-mpm=worker 
--sysconfdir=/var/conf/apache2 --with-included-apr 
--enable-nonportable-atomics=yes --enable-layout=GNU --with-gdbm 
--without-berkeley-db --enable-mods-shared=all --enable-cache=shared 
--enable-disk-cache=shared --enable-ssl=shared --enable-cgi=shared 
--enable-suexec --with-suexec-caller=yada --with-suexec-uidmin=1000
--with-suexec-gidmin=1000 CFLAGS="-march=i686 -g"

So, is anyone else able to reproduce this?

Any clue as to the reason? I see some notes in CHANGES about 
reusing brigades and so on, which might be related. However, I'm way 
too unclued to figure out even the general area of where things go 
wrong in bucket-land...

I did some other tests, for example fetching 45809664 bytes of the 
file and then continuing, I get this reply:
   Content-Length: 103800832
   Content-Range: bytes 45809664-4444577791/4444577792

This is of course dead wrong, and since wget trusts 
Content-Length, I end up with a truncated file. Talking to an 
httpd-2.2.6 server I get the correct reply.

Something is really messed up in 2.2.8 (and I'm partly to blame, since 
I didn't have time to test it prior to release ;)

An unrelated note: Why on earth chop the poor file into 267 buckets? 
MAX_BUCKET_SIZE in srclib/apr-util/buckets/apr_brigade.c is 1GB (which 
works, that's what I use with my DISKCACHE buckets), where does 16MB 
come from?


/Nikke - keeping a brown paper bag handy, any takers?
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  Many people are unenthusiastic about your work.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Brian Rectanus <br...@gmail.com>.
On Thu, Feb 21, 2008 at 1:09 PM, Niklas Edmundsson <ni...@acc.umu.se> wrote:
> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
>
>  > In any case, I should probably try to figure out how to reproduce this thing.
>  > All coredumps I've looked at have been when serving DVD images, which of
>  > course works flawlessly when I try it...
>
>  OK, I've been able to reproduce this, and it looks really bad because:
>
>  - I'm able to reproduce without having mod_cache loaded, ie. vanilla
>    httpd.
>  - It's as easy as continuing an aborted download, so it's a trivial
>    DOS.
>
>  So, to reproduce I did:
>  1) Download 2222288895 bytes of the total 4444577792 bytes of a DVD
>     image (debian-31r7-i386-binary-2.iso if you're curious).
>  2) Continue the download by doing wget -cS http://whatever/file.iso
>
>  This coredumps the server, immediately closing the connection to the
>  client.
>
>  Backtrace of coredump is:
>  #0  0xffffe410 in __kernel_vsyscall ()
>  #1  0xb7cefca6 in kill () from /lib/tls/i686/cmov/libc.so.6
>  #2  0x08089a03 in sig_coredump (sig=11) at mpm_common.c:1235
>  #3  <signal handler called>
>  #4  0x00000000 in ?? ()
>  #5  0x08093010 in ap_byterange_filter (f=0x81606a0, bb=0x8161360)
>      at byterange_filter.c:271
>  #6  0x0808aec5 in ap_pass_brigade (next=0x81606a0, bb=0x8161360)
>      at util_filter.c:526
>  #7  0x08077576 in default_handler (r=0x815f968) at core.c:3740
>  #8  0x0807df8d in ap_run_handler (r=0x815f968) at config.c:157
>  #9  0x0807e6d7 in ap_invoke_handler (r=0x815f968) at config.c:372
>  #10 0x0808ea7c in ap_process_request (r=0x815f968) at http_request.c:258
>  #11 0x0808b543 in ap_process_http_connection (c=0x815bb08) at http_core.c:190
>  #12 0x08086df3 in ap_run_process_connection (c=0x815bb08) at connection.c:43
>  #13 0x08087274 in ap_process_connection (c=0x815bb08, csd=0x815b958)
>      at connection.c:178
>  #14 0x08094b00 in process_socket (p=0x815b920, sock=0x815b958, my_child_num=0,
>      my_thread_num=0, bucket_alloc=0x815d928) at worker.c:544
>  #15 0x080953c8 in worker_thread (thd=0x812d378, dummy=0x815b460)
>      at worker.c:894
>  #16 0xb7e87eac in dummy_worker (opaque=0x812d378)
>      at threadproc/unix/thread.c:142
>  #17 0xb7e1846b in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
>  #18 0xb7d9873e in clone () from /lib/tls/i686/cmov/libc.so.6
>
>  (gdb) dump_bucket ec
>   bucket=¨0¸(0x08161364) length=135664344 data=0x080641b0
>       contents=[**unknown**]          rc=n/a
>
>  (gdb) print *ec
>  $1 = {link = {next = 0x815db00, prev = 0x8169a50}, type = 0x815d928,
>    length = 135664344, start = -5193905754803399840, data = 0x80641b0,
>    free = 0x8161390, list = 0x1}
>
>  (gdb) print *ec->type
>  $2 = {name = 0x815b920 "¨À\v\b0ù\025\b\030Ñ\022\b¸9\026\b\030À\025\b",
>    num_func = 135641240, is_metadata = APR_BUCKET_DATA, destroy = 0x816bd00,
>    read = 0x58, setaside = 0x815d928, split = 0x815d910, copy = 0}
>
>  (gdb) dump_brigade bb
>  dump of brigade 0x8161360
>     | type     (address)    | length | data addr
>  ---------------------------------------------------
>   0 | FILE     (0x0815db00) | 16777216 | 0x0815daa8
>   1 | FILE     (0x0815db58) | 16777216 | 0x0815daa8
>  <snip>
>  265 | FILE     (0x081699f8) | 16777216 | 0x0815daa8
>  266 | FILE     (0x0815d948) | 15392768 | 0x0815daa8
>  267 | EOS      (0x08169a50) | 0      | 0x00000000
>  end of brigade
>
>  So it looks to me that the bb brigade is intact, but the ec bucket has
>  been smashed into bits and pieces...
>
>  This is on ubuntu710-i386, configured with:
>  ./configure --prefix=/tmp/2.2.8.worker.debug --with-mpm=worker
>  --sysconfdir=/var/conf/apache2 --with-included-apr
>  --enable-nonportable-atomics=yes --enable-layout=GNU --with-gdbm
>  --without-berkeley-db --enable-mods-shared=all --enable-cache=shared
>  --enable-disk-cache=shared --enable-ssl=shared --enable-cgi=shared
>  --enable-suexec --with-suexec-caller=yada --with-suexec-uidmin=1000
>  --with-suexec-gidmin=1000 CFLAGS="-march=i686 -g"
>
>  So, is anyone else able to reproduce this?
>
>  Any clue on what's the reason? I see some notes in CHANGES about
>  reusing brigades and so on, which might be related. However I'm way
>  too unclued to figure out even the general area of where things go
>  wrong in bucket-land...
>
>  I did some other tests, for example fetching 45809664 bytes of the
>  file and then continuing, I get this reply:
>    Content-Length: 103800832
>    Content-Range: bytes 45809664-4444577791/4444577792
>
>  Which is of course dead wrong, and using wget which trusts
>  Content-Length I end up with a truncated file. Talking to a
>  httpd-2.2.6 server I get the correct reply.


Hmm, that looks like a 32-bit cutoff.


4444577791 - 45809663 =  4398768128

4398768128 - 103800832 = 4294967296

4294967296 == 2^32

-B

>
>  Something is really messed up in 2.2.8 (and I'm partly to blame, since
>  I didn't have time to test it prior to release ;)
>
>  An unrelated note: Why on earth chop the poor file into 267 buckets?
>  MAX_BUCKET_SIZE in srclib/apr-util/buckets/apr_brigade.c is 1GB (which
>  works, that's what I use with my DISKCACHE buckets), where does 16MB
>  come from?
>
>
>  /Nikke - keeping a brown paper bag handy, any takers?
>  --
>  -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>   Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
>  ---------------------------------------------------------------------------
>   Many people are unenthusiastic about your work.
>  =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/23/2008 09:46 AM, Niklas Edmundsson wrote:
> On Fri, 22 Feb 2008, Plüm, Rüdiger, VF-Group wrote:
> 
>>>     | type     (address)    | length | data addr
>>> ---------------------------------------------------
>>>   0 | FILE     (0x0815db00) | 16777216 | 0x0815daa8
>>>   1 | FILE     (0x0815db58) | 16777216 | 0x0815daa8
>>> <snip>
>>> 265 | FILE     (0x081699f8) | 16777216 | 0x0815daa8
>>> 266 | FILE     (0x0815d948) | 15392768 | 0x0815daa8
>>> 267 | EOS      (0x08169a50) | 0      | 0x00000000
>>> end of brigade
>>
>>
> 
>> Hm. Looks like to me that APR_BRIGADE_SENTINEL(ec) is true, because 
>> next points to the first bucket in the brigade and prev to the last 
>> one. AFAIK the SENTINEL is not a valid bucket and does not contain 
>> valid bucket data. This should NEVER happen and as we see the byte 
>> range filter code is not prepared to handle this.
> 
> Possibly. I wouldn't care too much though since backing out that faulty 
> patch to apr_brigade.c made the problem go away, even though it would 
> have been nicer with an "INTERNAL ERROR" message rather than a segfault.

I care, because I want to be sure that backing out the patch / fixing
apr_brigade_partition also fixes this one and that it is clear why we have
seen this 'corrupted' bucket. But I am pretty confident now that it was
the SENTINEL we saw here.

Regards

Rüdiger

Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Fri, 22 Feb 2008, Plüm, Rüdiger, VF-Group wrote:

>>     | type     (address)    | length | data addr
>> ---------------------------------------------------
>>   0 | FILE     (0x0815db00) | 16777216 | 0x0815daa8
>>   1 | FILE     (0x0815db58) | 16777216 | 0x0815daa8
>> <snip>
>> 265 | FILE     (0x081699f8) | 16777216 | 0x0815daa8
>> 266 | FILE     (0x0815d948) | 15392768 | 0x0815daa8
>> 267 | EOS      (0x08169a50) | 0      | 0x00000000
>> end of brigade
>
>

> Hm. Looks like to me that APR_BRIGADE_SENTINEL(ec) is true, because 
> next points to the first bucket in the brigade and prev to the last 
> one. AFAIK the SENTINEL is not a valid bucket and does not contain 
> valid bucket data. This should NEVER happen and as we see the byte 
> range filter code is not prepared to handle this.

Possibly. I wouldn't care too much though since backing out that 
faulty patch to apr_brigade.c made the problem go away, even though it 
would have been nicer with an "INTERNAL ERROR" message rather than a 
segfault.

/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  Captain, I sense millions of minds focused on my cleavage.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Plüm, Rüdiger, VF-Group <ru...@vodafone.com>.
 

> -----Ursprüngliche Nachricht-----
> Von: Niklas Edmundsson  
> Gesendet: Donnerstag, 21. Februar 2008 22:10
> An: dev@httpd.apache.org
> Betreff: Re: httpd 2.2.8 segfaults
> 
> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
> 
> > In any case, I should probably try to figure out how to 
> reproduce this thing. 
> > All coredumps I've looked at have been when serving DVD 
> images, which of 
> > course works flawlessly when I try it...
> 
> OK, I've been able to reproduce this, and it looks really bad because:
> 
> - I'm able to reproduce without having mod_cache loaded, ie. vanilla
>    httpd.
> - It's as easy as continuing an aborted download, so it's a trivial
>    DOS.
> 
> So, to reproduce I did:
> 1) Download 2222288895 bytes of the total 4444577792 bytes of a DVD
>     image (debian-31r7-i386-binary-2.iso if you're curious).
> 2) Continue the download by doing wget -cS http://whatever/file.iso
> 
> This coredumps the server, immediately closing the connection to the 
> client.
> 
> Backtrace of coredump is:
> #0  0xffffe410 in __kernel_vsyscall ()
> #1  0xb7cefca6 in kill () from /lib/tls/i686/cmov/libc.so.6
> #2  0x08089a03 in sig_coredump (sig=11) at mpm_common.c:1235
> #3  <signal handler called>
> #4  0x00000000 in ?? ()
> #5  0x08093010 in ap_byterange_filter (f=0x81606a0, bb=0x8161360)
>      at byterange_filter.c:271
> #6  0x0808aec5 in ap_pass_brigade (next=0x81606a0, bb=0x8161360)
>      at util_filter.c:526
> #7  0x08077576 in default_handler (r=0x815f968) at core.c:3740
> #8  0x0807df8d in ap_run_handler (r=0x815f968) at config.c:157
> #9  0x0807e6d7 in ap_invoke_handler (r=0x815f968) at config.c:372
> #10 0x0808ea7c in ap_process_request (r=0x815f968) at 
> http_request.c:258
> #11 0x0808b543 in ap_process_http_connection (c=0x815bb08) at 
> http_core.c:190
> #12 0x08086df3 in ap_run_process_connection (c=0x815bb08) at 
> connection.c:43
> #13 0x08087274 in ap_process_connection (c=0x815bb08, csd=0x815b958)
>      at connection.c:178
> #14 0x08094b00 in process_socket (p=0x815b920, 
> sock=0x815b958, my_child_num=0,
>      my_thread_num=0, bucket_alloc=0x815d928) at worker.c:544
> #15 0x080953c8 in worker_thread (thd=0x812d378, dummy=0x815b460)
>      at worker.c:894
> #16 0xb7e87eac in dummy_worker (opaque=0x812d378)
>      at threadproc/unix/thread.c:142
> #17 0xb7e1846b in start_thread () from 
> /lib/tls/i686/cmov/libpthread.so.0
> #18 0xb7d9873e in clone () from /lib/tls/i686/cmov/libc.so.6
> 
> (gdb) dump_bucket ec
>   bucket=¨0¸(0x08161364) length=135664344 data=0x080641b0
>       contents=[**unknown**]          rc=n/a
> 
> (gdb) print *ec
> $1 = {link = {next = 0x815db00, prev = 0x8169a50}, type = 0x815d928,
>    length = 135664344, start = -5193905754803399840, data = 0x80641b0,
>    free = 0x8161390, list = 0x1}
> 
> (gdb) print *ec->type
> $2 = {name = 0x815b920 "¨À\v\b0ù\025\b\030Ñ\022\b¸9\026\b\030À\025\b",
>    num_func = 135641240, is_metadata = APR_BUCKET_DATA, 
> destroy = 0x816bd00,
>    read = 0x58, setaside = 0x815d928, split = 0x815d910, copy = 0}
> 
> (gdb) dump_brigade bb
> dump of brigade 0x8161360
>     | type     (address)    | length | data addr 
> ---------------------------------------------------
>   0 | FILE     (0x0815db00) | 16777216 | 0x0815daa8
>   1 | FILE     (0x0815db58) | 16777216 | 0x0815daa8 
> <snip>
> 265 | FILE     (0x081699f8) | 16777216 | 0x0815daa8 
> 266 | FILE     (0x0815d948) | 15392768 | 0x0815daa8 
> 267 | EOS      (0x08169a50) | 0      | 0x00000000 
> end of brigade


Hm. It looks to me like APR_BRIGADE_SENTINEL(ec) is true, because next points to
the first bucket in the brigade and prev to the last one. AFAIK the SENTINEL
is not a valid bucket and does not contain valid bucket data.
This should NEVER happen, and as we can see, the byte range filter code is not
prepared to handle this.

Regards

Rüdiger



Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
> 
>> In any case, I should probably try to figure out how to reproduce this 
>> thing. All coredumps I've looked at have been when serving DVD images, 
>> which of course works flawlessly when I try it...
> 
> OK, I've been able to reproduce this, and it looks really bad because:
> 
> - I'm able to reproduce without having mod_cache loaded, ie. vanilla
>   httpd.
> - It's as easy as continuing an aborted download, so it's a trivial
>   DOS.
> 
> So, to reproduce I did:
> 1) Download 2222288895 bytes of the total 4444577792 bytes of a DVD
>    image (debian-31r7-i386-binary-2.iso if you're curious).
> 2) Continue the download by doing wget -cS http://whatever/file.iso
> 
> This coredumps the server, immediately closing the connection to the 
> client.
> 
> Backtrace of coredump is:
> #0  0xffffe410 in __kernel_vsyscall ()
> #1  0xb7cefca6 in kill () from /lib/tls/i686/cmov/libc.so.6
> #2  0x08089a03 in sig_coredump (sig=11) at mpm_common.c:1235
> #3  <signal handler called>
> #4  0x00000000 in ?? ()
> #5  0x08093010 in ap_byterange_filter (f=0x81606a0, bb=0x8161360)
>     at byterange_filter.c:271
> #6  0x0808aec5 in ap_pass_brigade (next=0x81606a0, bb=0x8161360)
>     at util_filter.c:526
> #7  0x08077576 in default_handler (r=0x815f968) at core.c:3740
> #8  0x0807df8d in ap_run_handler (r=0x815f968) at config.c:157
> #9  0x0807e6d7 in ap_invoke_handler (r=0x815f968) at config.c:372
> #10 0x0808ea7c in ap_process_request (r=0x815f968) at http_request.c:258
> #11 0x0808b543 in ap_process_http_connection (c=0x815bb08) at 
> http_core.c:190
> #12 0x08086df3 in ap_run_process_connection (c=0x815bb08) at 
> connection.c:43
> #13 0x08087274 in ap_process_connection (c=0x815bb08, csd=0x815b958)
>     at connection.c:178
> #14 0x08094b00 in process_socket (p=0x815b920, sock=0x815b958, 
> my_child_num=0,
>     my_thread_num=0, bucket_alloc=0x815d928) at worker.c:544
> #15 0x080953c8 in worker_thread (thd=0x812d378, dummy=0x815b460)
>     at worker.c:894
> #16 0xb7e87eac in dummy_worker (opaque=0x812d378)
>     at threadproc/unix/thread.c:142
> #17 0xb7e1846b in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
> #18 0xb7d9873e in clone () from /lib/tls/i686/cmov/libc.so.6
> 
> (gdb) dump_bucket ec
>  bucket=¨0¸(0x08161364) length=135664344 data=0x080641b0
>      contents=[**unknown**]          rc=n/a
> 
> (gdb) print *ec
> $1 = {link = {next = 0x815db00, prev = 0x8169a50}, type = 0x815d928,
>   length = 135664344, start = -5193905754803399840, data = 0x80641b0,
>   free = 0x8161390, list = 0x1}

Quick question:

What is shown by

- print ec

Is the 'smashed' bucket always in the same position in the brigade (e.g. always the first, second, ...)?

Regards

Rüdiger



Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Thu, 21 Feb 2008, Ruediger Pluem wrote:

> Quick, maybe completely stupid question as I suspect the problem in 
> apr_brigade_partition:

Quick answer before I go to bed :)

> I always thought that apr_off_t and apr_size_t are *always* of the 
> same size and that the only difference between them is that 
> apr_size_t is unsigned whereas apr_off_t is signed.

> Is this thought correct?

No.

From my understanding, apr_size_t is what can be addressed by the 
architecture, i.e. on a 32-bit machine it would be 32-bit unsigned.

apr_off_t is the file offset, and thus not coupled to the 
architecture but rather to whether we're LFS capable or not.

/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  In case of fire, yell, "FIRE" You know it makes sense
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/21/2008 11:59 PM, William A. Rowe, Jr. wrote:
> Ruediger Pluem wrote:
>>
>> I always thought that apr_off_t and apr_size_t are *always* of the 
>> same size and that the
>> only difference between them is that apr_size_t is unsigned whereas 
>> apr_off_t is signed.
>> Is this thought correct?
> 
> NO NO NO no no.
> 
> off_t represents an index to storage (io through FILE, fd, apr_file
> whatever).
> 
> size_t represents an index to memory, corresponding to sizeof(void*)
> 
> off = offset into externals, size = offset into our memory space.
> 

Many thanks for the clarification and just to make a complete fool of myself
before I leave for bed :-):

apr_off_t is always signed whereas apr_size_t is always unsigned, correct?

Regards

Rüdiger

Re: httpd 2.2.8 segfaults

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
Ruediger Pluem wrote:
> 
> I always thought that apr_off_t and apr_size_t are *always* of the same 
> size and that the
> only difference between them is that apr_size_t is unsigned whereas 
> apr_off_t is signed.
> Is this thought correct?

NO NO NO no no.

off_t represents an index into storage (I/O through FILE, fd, apr_file,
whatever).

size_t represents an index into memory, corresponding to sizeof(void*)

off = offset into externals, size = offset into our memory space.

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/21/2008 10:33 PM, Ruediger Pluem wrote:
> 
> 
> On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
>> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
>>
>>> In any case, I should probably try to figure out how to reproduce 
>>> this thing. All coredumps I've looked at have been when serving DVD 
>>> images, which of course works flawlessly when I try it...
>>
>> OK, I've been able to reproduce this, and it looks really bad because:
> 
> Could you please check if backing out the following patch out of apr-util
> 'fixes' the problem:
> 
> http://svn.apache.org/viewvc/apr/apr-util/branches/1.2.x/buckets/apr_brigade.c?r1=232557&r2=588057 

Quick, maybe completely stupid question as I suspect the problem in apr_brigade_partition:

I always thought that apr_off_t and apr_size_t are *always* of the same size and that the
only difference between them is that apr_size_t is unsigned whereas apr_off_t is signed.
Is this thought correct?

Regards

Rüdiger

Re: httpd 2.2.8 segfaults

Posted by Plüm, Rüdiger, VF-Group <ru...@vodafone.com>.
 

> -----Ursprüngliche Nachricht-----
> Von: Niklas Edmundsson 
> Gesendet: Sonntag, 24. Februar 2008 18:11
> An: dev@httpd.apache.org
> Cc: APR Developer List
> Betreff: Re: httpd 2.2.8 segfaults
> 
> On Sun, 24 Feb 2008, Ruediger Pluem wrote:
> 
> >> It seems to work after fixing the 
> APR_SIZE_MAX/MAX_APR_SIZE_T thing, this 
> >> is (still) httpd-2.2.8 on Ubuntu 32bit LFS-enabled.
> >
> > Just for clarification: The crashes are gone with this patch?
> > Then I would commit to apr-util trunk.
> 
> I can't reproduce them at least, using the same testcases that acted 
> up previously:
> - 4.1GB file, continuing at approx 2.5GB (instant segfault with stock
>    httpd 2.2.8+included APR).
> - same 4.1GB file, continuing at approx 50MB (bogus content-length)
> 
> So I'd say it's a step in the right direction.

Ok. I committed the patch to apr-util-trunk as r630780.

Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Sun, 24 Feb 2008, Ruediger Pluem wrote:

>> It seems to work after fixing the APR_SIZE_MAX/MAX_APR_SIZE_T thing, this 
>> is (still) httpd-2.2.8 on Ubuntu 32bit LFS-enabled.
>
> Just for clarification: The crashes are gone with this patch?
> Then I would commit to apr-util trunk.

I can't reproduce them at least, using the same testcases that acted 
up previously:
- 4.1GB file, continuing at approx 2.5GB (instant segfault with stock
   httpd 2.2.8+included APR).
- same 4.1GB file, continuing at approx 50MB (bogus content-length)

So I'd say it's a step in the right direction.

/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  *}    -    |            Tribble archery
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/24/2008 03:16 PM, Niklas Edmundsson wrote:
> On Sat, 23 Feb 2008, Ruediger Pluem wrote:
> 
>>> I'm still not liking the casts and the mixed -1's, APR_SIZE_MAX and 
>>> MAX_APR_SIZE_T...
>>>
>>> In any case, I'll be busy for most of this weekend so I probably 
>>> won't have time to try patches until monday...
>>
>> Thats fine. Looking forward to your test results.
> 
> It seems to work after fixing the APR_SIZE_MAX/MAX_APR_SIZE_T thing, 
> this is (still) httpd-2.2.8 on Ubuntu 32bit LFS-enabled.

Just for clarification: The crashes are gone with this patch?
Then I would commit to apr-util trunk.

> 
> Compiling with optimization and all warnings (-O3 -W -Wall) I only get 
> these, which are not related to apr_brigade_partition:
> buckets/apr_brigade.c: In function 'apr_brigade_to_iovec':
> buckets/apr_brigade.c:359: warning: dereferencing type-punned pointer 
> will break strict-aliasing rules
> buckets/apr_brigade.c: In function 'apr_brigade_vprintf':
> buckets/apr_brigade.c:681: warning: comparison between signed and unsigned
> 
> The warning in apr_brigade_vprintf is trivial to fix, it should be 'int 
> written' instead of 'apr_size_t written' since that's what 
> apr_vformatter seems to return.

Thanks. Fixed on apr-util trunk in r630625.

> 
> Also, I get the point of (apr_size_t)(-1) now since it's the documented 
> "unknown bucket length" indicator. Ugly, but effective.
> 
> However, it seems to me that this leaves a hole for off-by-one 
> opportunities because on 32bit:
> 
> (apr_size_t)(-1) is 0xffffffff
> MAX_APR_SIZE_T is also 0xffffffff
> 
> This means that a bucket can hold max 0xfffffffe bytes right?

Hm. Yes and no; a bucket of size 0xffffffff cannot be distinguished
from a bucket of unknown length, which may lead to "interesting"
code paths.

> 
> Without checking too much I would guess that passing 0xffffffff as 
> "point" argument to apr_brigade_partition could fall between the cracks 
> since the comparisons with MAX_APR_SIZE_T are all < or > ...

From a quick check I would say that apr_brigade_partition still works
as designed even in this edge case.

> 
> From a deobfuscating view to avoid future bugs I would suggest:
> 1) Create proper defines for use with the bucket length, ie.
>    MAX_BUCKET_LEN and BUCKET_LEN_UNKNOWN or something. It's much
>    easier to read and keep track of.

Seems to be a valid point, but because of APR's versioning rules this
can only be done on trunk. We need to keep in mind that (apr_size_t)(-1)
is used in many places in apr-util / httpd, so it will be some tedious
work to get this replaced, but I guess it's worth it.

> 
> 2) Wouldn't it make more sense of having apr_brigade_partition() being
>    a little more careful like apr_brigade_insert_file() and creating
>    buckets of at most MAX_BUCKET_SIZE bytes?

Effectively, apr_brigade_partition does not create buckets larger than
the ones that were supplied, as apr_bucket_split only creates buckets
of the same length or smaller. So I do not see the point of an
additional "nanny" behaviour in apr_brigade_partition here. This should
be fixed by the caller.

Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Sat, 23 Feb 2008, Ruediger Pluem wrote:

>> I'm still not liking the casts and the mixed -1's, APR_SIZE_MAX and 
>> MAX_APR_SIZE_T...
>> 
>> In any case, I'll be busy for most of this weekend so I probably won't have 
>> time to try patches until monday...
>
> Thats fine. Looking forward to your test results.

It seems to work after fixing the APR_SIZE_MAX/MAX_APR_SIZE_T thing, 
this is (still) httpd-2.2.8 on Ubuntu 32bit LFS-enabled.

Compiling with optimization and all warnings (-O3 -W -Wall) I only get 
these, which are not related to apr_brigade_partition:
buckets/apr_brigade.c: In function 'apr_brigade_to_iovec':
buckets/apr_brigade.c:359: warning: dereferencing type-punned pointer will break strict-aliasing rules
buckets/apr_brigade.c: In function 'apr_brigade_vprintf':
buckets/apr_brigade.c:681: warning: comparison between signed and unsigned

The warning in apr_brigade_vprintf is trivial to fix, it should be 
'int written' instead of 'apr_size_t written' since that's what 
apr_vformatter seems to return.

Also, I get the point of (apr_size_t)(-1) now since it's the 
documented "unknown bucket length" indicator. Ugly, but effective.

However, it seems to me that this leaves a hole for off-by-one 
opportunities because on 32bit:

(apr_size_t)(-1) is 0xffffffff
MAX_APR_SIZE_T is also 0xffffffff

This means that a bucket can hold max 0xfffffffe bytes right?

Without checking too much I would guess that passing 0xffffffff as 
"point" argument to apr_brigade_partition could fall between the 
cracks since the comparisons with MAX_APR_SIZE_T are all < or > ...

From a deobfuscating view to avoid future bugs I would suggest:
1) Create proper defines for use with the bucket length, ie.
    MAX_BUCKET_LEN and BUCKET_LEN_UNKNOWN or something. It's much
    easier to read and keep track of.

2) Wouldn't it make more sense to have apr_brigade_partition() be
    a little more careful, like apr_brigade_insert_file(), and create
    buckets of at most MAX_BUCKET_SIZE bytes?


/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  If at first you don't succeed, skydiving's out!
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/23/2008 09:54 AM, Niklas Edmundsson wrote:

> 
> I'm still not liking the casts and the mixed -1's, APR_SIZE_MAX and 
> MAX_APR_SIZE_T...
> 
> In any case, I'll be busy for most of this weekend so I probably won't 
> have time to try patches until monday...

That's fine. Looking forward to your test results.

Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Fri, 22 Feb 2008, Plüm, Rüdiger, VF-Group wrote:

>>>> In general, that patch looks truly suspicious since it seems to me
>>>> it's typecasting wildly and not even using its newly invented
>>>> MAX_APR_SIZE_T in all places, because (apr_size_t)(-1)
>> really is the
>>>> same thing, right?
>>>
>>> No, MAX_APR_SIZE_T and (apr_size_t)(-1) might be different
>> depending on the
>>> platform. MAX_APR_SIZE_T is ~(apr_size_t)(0).
>>
>> Won't both be 0xff...ff as long as apr_size_t is unsigned (which it
>> should be)? If not, the code makes even less sense..
>
> I thought the same so far. But there seem to be platforms that we support
> where this is not the case. Don't ask which platforms these are. Somebody?

size_t must be unsigned for (apr_size_t)(-1) to work, or this code 
will be rather bogus IMHO. Comparing the length to signed -1 doesn't 
seem really productive...

>> Both casting signed -1 to unsigned and flipping the bits of 0 are
>> standard methods to get the max-value possible to store in a
>> variable...
>>
>>> As I have overcome my confusion regarding apr_off_t / apr_size_t I
>>> hope to have a look into the problem and find a solution how to do
>>> all the casting stuff correctly.
>>
>> My tip would be: less casts. If they're needed they're usually a sign
>> of bad design or a thinko somewhere.
>

> Meanwhile I tried to clean this up in trunk. Can you please try the 
> attached patch?
>
> Keep in mind that MAX_APR_SIZE_T is not present in apr-util 1.2.x 
> and that you need to adjust this manually. Remote eyes welcome as 
> well.

I'm still not liking the casts and the mixed -1's, APR_SIZE_MAX and 
MAX_APR_SIZE_T...

In any case, I'll be busy for most of this weekend so I probably won't 
have time to try patches until monday...

> Index: apr_brigade.c
> ===================================================================
> --- apr_brigade.c       (revision 630122)
> +++ apr_brigade.c       (working copy)
> @@ -97,6 +97,7 @@
>     apr_bucket *e;
>     const char *s;
>     apr_size_t len;
> +    apr_uint64_t point64;
>     apr_status_t rv;
>
>     if (point < 0) {
> @@ -108,17 +109,25 @@
>         return APR_SUCCESS;
>     }
>
> +    /*
> +     * Try to reduce the following casting mess: We know that point will be
> +     * larger equal 0 now and forever and thus that point (apr_off_t) and
> +     * apr_size_t will fit into apr_uint64_t in any case.
> +     */
> +    point64 = (apr_uint64_t)point;
> +
>     APR_BRIGADE_CHECK_CONSISTENCY(b);
>
>     for (e = APR_BRIGADE_FIRST(b);
>          e != APR_BRIGADE_SENTINEL(b);
>          e = APR_BUCKET_NEXT(e))
>     {
> -        /* For an unknown length bucket, while 'point' is beyond the possible
> +        /* For an unknown length bucket, while 'point64' is beyond the possible
>          * size contained in apr_size_t, read and continue...
>          */
> -        if ((e->length == (apr_size_t)(-1)) && (point > APR_SIZE_MAX)) {
> -            /* point is too far out to simply split this bucket,
> +        if ((e->length == (apr_size_t)(-1))
> +            && (point64 > (apr_uint64_t)APR_SIZE_MAX)) {
> +            /* point64 is too far out to simply split this bucket,
>              * we must fix this bucket's size and keep going... */
>             rv = apr_bucket_read(e, &s, &len, APR_BLOCK_READ);
>             if (rv != APR_SUCCESS) {
> @@ -126,14 +135,15 @@
>                 return rv;
>             }
>         }
> -        else if (((apr_size_t)point < e->length) || (e->length == (apr_size_t)(-1))) {
> -            /* We already consumed buckets where point is beyond
> +        else if ((point64 < (apr_uint64_t)e->length)
> +                 || (e->length == (apr_size_t)(-1))) {
> +            /* We already consumed buckets where point64 is beyond
>              * our interest ( point > MAX_APR_SIZE_T ), above.
> -             * Here point falls between 0 and MAX_APR_SIZE_T
> +             * Here point falls between 0 and MAX_APR_SIZE_T
>              * and is within this bucket, or this bucket's len
>              * is undefined, so now we are ready to split it.
>              * First try to split the bucket natively... */
> -            if ((rv = apr_bucket_split(e, (apr_size_t)point))
> +            if ((rv = apr_bucket_split(e, (apr_size_t)point64))
>                     != APR_ENOTIMPL) {
>                 *after_point = APR_BUCKET_NEXT(e);
>                 return rv;
> @@ -150,17 +160,17 @@
>             /* this assumes that len == e->length, which is okay because e
>              * might have been morphed by the apr_bucket_read() above, but
>              * if it was, the length would have been adjusted appropriately */
> -            if ((apr_size_t)point < e->length) {
> +            if (point64 < (apr_uint64_t)e->length) {
>                 rv = apr_bucket_split(e, (apr_size_t)point);
>                 *after_point = APR_BUCKET_NEXT(e);
>                 return rv;
>             }
>         }
> -        if (point == e->length) {
> +        if (point64 == (apr_uint64_t)e->length) {
>             *after_point = APR_BUCKET_NEXT(e);
>             return APR_SUCCESS;
>         }
> -        point -= e->length;
> +        point64 -= (apr_uint64_t)e->length;
>     }
>     *after_point = APR_BRIGADE_SENTINEL(b);
>     return APR_INCOMPLETE;
>
>
> Regards
>
> Rüdiger
>
>


/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  Captain, I sense millions of minds focused on my cleavage.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Feb 22, 2008, at 5:21 PM, Ruediger Pluem wrote:

>
>
> On 02/22/2008 07:40 PM, William A. Rowe, Jr. wrote:
>> Joe Orton wrote:
>>> CC'ing dev@apr since the code in question is in APR.
>>>
>>> On Fri, Feb 22, 2008 at 05:45:53PM +0100, Plüm, Rüdiger, VF-Group  
>>> wrote:
>>>>> On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:
>>>>>> +    /*
>>>>>> +     * Try to reduce the following casting mess: We know that  
>>>>>> point will be
>>>>>> +     * larger equal 0 now and forever and thus that point  
>>>>>> (apr_off_t) and
>>>>>> +     * apr_size_t will fit into apr_uint64_t in any case.
>>>>>> +     */
>>>>> Do we really know that? Is that confirmed at configure
>>>>> time?
>>>> Do we have any integer on any platform that we support that is  
>>>> larger
>>>> as apr_uint64_t / apr_int64_t?
>>>> I always thought that they are the largest and that on no platform
>>>> we have any integers with more than 64 bit.
>>>
>>> APR doesn't support any platform where sizeof(apr_off_t) > 8, that  
>>> is correct.
>> Don't we know for a fact that apr_off_t >= apr_size_t on all  
>> platforms,
>> today?
>
> Do we have 32 bit platforms without LFS?

Why are we asking these questions? If we need to ask or
ensure something, that is what configure is there for :) :)



Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/22/2008 07:40 PM, William A. Rowe, Jr. wrote:
> Joe Orton wrote:
>> CC'ing dev@apr since the code in question is in APR.
>>
>> On Fri, Feb 22, 2008 at 05:45:53PM +0100, Plüm, Rüdiger, VF-Group wrote:
>>>> On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:
>>>>> +    /*
>>>>> +     * Try to reduce the following casting mess: We know that 
>>>>> point will be
>>>>> +     * larger equal 0 now and forever and thus that point 
>>>>> (apr_off_t) and
>>>>> +     * apr_size_t will fit into apr_uint64_t in any case.
>>>>> +     */
>>>> Do we really know that? Is that confirmed at configure
>>>> time?
>>> Do we have any integer on any platform that we support that is larger
>>> as apr_uint64_t / apr_int64_t?
>>> I always thought that they are the largest and that on no platform
>>> we have any integers with more than 64 bit.
>>
>> APR doesn't support any platform where sizeof(apr_off_t) > 8, that is 
>> correct.
> 
> Don't we know for a fact that apr_off_t >= apr_size_t on all platforms,
> today?

Do we have 32 bit platforms without LFS? In this case I would assume
that apr_size_t is an unsigned 32 bit integer whereas apr_off_t is a signed
32 bit integer. So this assumption would not be true there.
Even so, I do not think that it would really hurt us in practice, but what about
64-bit platforms? Don't they use a 64-bit signed integer for apr_off_t and
a 64-bit unsigned one for apr_size_t?

> 
> I can't see how apr supporting only file offsets smaller than available
> memory would ever be desirable.

I think this is not a matter of APR, but a matter of the original definitions
of off_t and size_t, as in my possibly correct examples above :-).

Regards

Rüdiger



Re: httpd 2.2.8 segfaults

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
Joe Orton wrote:
> CC'ing dev@apr since the code in question is in APR.
> 
> On Fri, Feb 22, 2008 at 05:45:53PM +0100, Plüm, Rüdiger, VF-Group wrote:
>>> On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:
>>>> +    /*
>>>> +     * Try to reduce the following casting mess: We know that point will be
>>>> +     * larger equal 0 now and forever and thus that point (apr_off_t) and
>>>> +     * apr_size_t will fit into apr_uint64_t in any case.
>>>> +     */
>>> Do we really know that? Is that confirmed at configure
>>> time?
>> Do we have any integer on any platform that we support that is larger
>> as apr_uint64_t / apr_int64_t?
>> I always thought that they are the largest and that on no platform
>> we have any integers with more than 64 bit.
> 
> APR doesn't support any platform where sizeof(apr_off_t) > 8, that is 
> correct.

Don't we know for a fact that apr_off_t >= apr_size_t on all platforms,
today?

I can't see how apr supporting only file offsets smaller than available
memory would ever be desirable.

Re: httpd 2.2.8 segfaults

Posted by Joe Orton <jo...@redhat.com>.
CC'ing dev@apr since the code in question is in APR.

On Fri, Feb 22, 2008 at 05:45:53PM +0100, Plüm, Rüdiger, VF-Group wrote:
> > On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:
> > > +    /*
> > > +     * Try to reduce the following casting mess: We know that point will be
> > > +     * larger equal 0 now and forever and thus that point (apr_off_t) and
> > > +     * apr_size_t will fit into apr_uint64_t in any case.
> > > +     */
> > 
> > Do we really know that? Is that confirmed at configure
> > time?
> 
> Do we have any integer on any platform that we support that is larger
> as apr_uint64_t / apr_int64_t?
> I always thought that they are the largest and that on no platform
> we have any integers with more than 64 bit.

APR doesn't support any platform where sizeof(apr_off_t) > 8, that is 
correct.

joe

Re: httpd 2.2.8 segfaults

Posted by Plüm, Rüdiger, VF-Group <ru...@vodafone.com>.
 

> -----Ursprüngliche Nachricht-----
> Von: Jim Jagielski 
> Gesendet: Freitag, 22. Februar 2008 17:41
> An: dev@httpd.apache.org
> Betreff: Re: httpd 2.2.8 segfaults
> 
> 
> On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:
> 
> > +    /*
> > +     * Try to reduce the following casting mess: We know 
> that point  
> > will be
> > +     * larger equal 0 now and forever and thus that point  
> > (apr_off_t) and
> > +     * apr_size_t will fit into apr_uint64_t in any case.
> > +     */
> 
> Do we really know that? Is that confirmed at configure
> time?

Do we have any integer on any platform that we support that is larger
than apr_uint64_t / apr_int64_t?
I always thought that they are the largest and that on no platform
do we have any integers with more than 64 bits.

Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Feb 22, 2008, at 9:27 AM, Plüm, Rüdiger, VF-Group wrote:

> +    /*
> +     * Try to reduce the following casting mess: We know that point  
> will be
> +     * larger equal 0 now and forever and thus that point  
> (apr_off_t) and
> +     * apr_size_t will fit into apr_uint64_t in any case.
> +     */

Do we really know that? Is that confirmed at configure
time?


Re: httpd 2.2.8 segfaults

Posted by Plüm, Rüdiger, VF-Group <ru...@vodafone.com>.
 

> -----Ursprüngliche Nachricht-----
> Von: Niklas Edmundsson 
> Gesendet: Freitag, 22. Februar 2008 13:45
> An: dev@httpd.apache.org
> Betreff: Re: httpd 2.2.8 segfaults
> 
> On Fri, 22 Feb 2008, Plüm, Rüdiger, VF-Group wrote:
> 
> >> In general, that patch looks truly suspicious since it seems to me
> >> it's typecasting wildly and not even using its newly invented
> >> MAX_APR_SIZE_T in all places, because (apr_size_t)(-1) 
> really is the
> >> same thing, right?
> >
> > No, MAX_APR_SIZE_T and (apr_size_t)(-1) might be different 
> depending on the
> > platform. MAX_APR_SIZE_T is ~(apr_size_t)(0).
> 
> Won't both be 0xff...ff as long as apr_size_t is unsigned (which it 
> should be)? If not, the code makes even less sense..

I thought the same so far. But there seem to be platforms that we support
where this is not the case. Don't ask which platforms these are. Somebody?

> 
> Both casting signed -1 to unsigned and flipping the bits of 0 are 
> standard methods to get the max-value possible to store in a 
> variable...
> 
> > As I have overcome my confusion regarding apr_off_t / apr_size_t I 
> > hope to have a look into the problem and find a solution how to do 
> > all the casting stuff correctly.
> 
> My tip would be: less casts. If they're needed they're usually a sign 
> of bad design or a thinko somewhere.

Meanwhile I tried to clean this up in trunk. Can you please try the attached patch?

Keep in mind that MAX_APR_SIZE_T is not present in apr-util 1.2.x and that you
need to adjust this manually. Remote eyes welcome as well.

Index: apr_brigade.c
===================================================================
--- apr_brigade.c       (revision 630122)
+++ apr_brigade.c       (working copy)
@@ -97,6 +97,7 @@
     apr_bucket *e;
     const char *s;
     apr_size_t len;
+    apr_uint64_t point64;
     apr_status_t rv;

     if (point < 0) {
@@ -108,17 +109,25 @@
         return APR_SUCCESS;
     }

+    /*
+     * Try to reduce the following casting mess: We know that point will be
+     * larger equal 0 now and forever and thus that point (apr_off_t) and
+     * apr_size_t will fit into apr_uint64_t in any case.
+     */
+    point64 = (apr_uint64_t)point;
+
     APR_BRIGADE_CHECK_CONSISTENCY(b);

     for (e = APR_BRIGADE_FIRST(b);
          e != APR_BRIGADE_SENTINEL(b);
          e = APR_BUCKET_NEXT(e))
     {
-        /* For an unknown length bucket, while 'point' is beyond the possible
+        /* For an unknown length bucket, while 'point64' is beyond the possible
          * size contained in apr_size_t, read and continue...
          */
-        if ((e->length == (apr_size_t)(-1)) && (point > APR_SIZE_MAX)) {
-            /* point is too far out to simply split this bucket,
+        if ((e->length == (apr_size_t)(-1))
+            && (point64 > (apr_uint64_t)APR_SIZE_MAX)) {
+            /* point64 is too far out to simply split this bucket,
              * we must fix this bucket's size and keep going... */
             rv = apr_bucket_read(e, &s, &len, APR_BLOCK_READ);
             if (rv != APR_SUCCESS) {
@@ -126,14 +135,15 @@
                 return rv;
             }
         }
-        else if (((apr_size_t)point < e->length) || (e->length == (apr_size_t)(-1))) {
-            /* We already consumed buckets where point is beyond
+        else if ((point64 < (apr_uint64_t)e->length)
+                 || (e->length == (apr_size_t)(-1))) {
+            /* We already consumed buckets where point64 is beyond
              * our interest ( point > MAX_APR_SIZE_T ), above.
-             * Here point falls between 0 and MAX_APR_SIZE_T
+             * Here point falls between 0 and MAX_APR_SIZE_T
              * and is within this bucket, or this bucket's len
              * is undefined, so now we are ready to split it.
              * First try to split the bucket natively... */
-            if ((rv = apr_bucket_split(e, (apr_size_t)point))
+            if ((rv = apr_bucket_split(e, (apr_size_t)point64))
                     != APR_ENOTIMPL) {
                 *after_point = APR_BUCKET_NEXT(e);
                 return rv;
@@ -150,17 +160,17 @@
             /* this assumes that len == e->length, which is okay because e
              * might have been morphed by the apr_bucket_read() above, but
              * if it was, the length would have been adjusted appropriately */
-            if ((apr_size_t)point < e->length) {
+            if (point64 < (apr_uint64_t)e->length) {
                 rv = apr_bucket_split(e, (apr_size_t)point);
                 *after_point = APR_BUCKET_NEXT(e);
                 return rv;
             }
         }
-        if (point == e->length) {
+        if (point64 == (apr_uint64_t)e->length) {
             *after_point = APR_BUCKET_NEXT(e);
             return APR_SUCCESS;
         }
-        point -= e->length;
+        point64 -= (apr_uint64_t)e->length;
     }
     *after_point = APR_BRIGADE_SENTINEL(b);
     return APR_INCOMPLETE;


Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Fri, 22 Feb 2008, Plüm, Rüdiger, VF-Group wrote:

>> In general, that patch looks truly suspicious since it seems to me
>> it's typecasting wildly and not even using its newly invented
>> MAX_APR_SIZE_T in all places, because (apr_size_t)(-1) really is the
>> same thing, right?
>
> No, MAX_APR_SIZE_T and (apr_size_t)(-1) might be different depending on the
> platform. MAX_APR_SIZE_T is ~(apr_size_t)(0).

Won't both be 0xff...ff as long as apr_size_t is unsigned (which it 
should be)? If not, the code makes even less sense...

Both casting signed -1 to unsigned and flipping the bits of 0 are 
standard methods to get the max-value possible to store in a 
variable...

> As I have overcome my confusion regarding apr_off_t / apr_size_t I 
> hope to have a look into the problem and find a solution how to do 
> all the casting stuff correctly.

My tip would be: less casts. If they're needed they're usually a sign 
of bad design or a thinko somewhere.

/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  "Data, find the USS Pasteur." - Picard
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Plüm, Rüdiger, VF-Group <ru...@vodafone.com>.
 

> -----Ursprüngliche Nachricht-----
> Von: Niklas Edmundsson 
> Gesendet: Freitag, 22. Februar 2008 11:04
> An: dev@httpd.apache.org
> Betreff: Re: httpd 2.2.8 segfaults
> 
> On Thu, 21 Feb 2008, Ruediger Pluem wrote:
> 
> >
> >
> > On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
> >> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
> >> 
> >>> In any case, I should probably try to figure out how to 
> reproduce this 
> >>> thing. All coredumps I've looked at have been when 
> serving DVD images, 
> >>> which of course works flawlessly when I try it...
> >> 
> >> OK, I've been able to reproduce this, and it looks really 
> bad because:
> >
> > Could you please check if backing out the following patch 
> out of apr-util
> > 'fixes' the problem:
> >
> > 
> http://svn.apache.org/viewvc/apr/apr-util/branches/1.2.x/bucke
> ts/apr_brigade.c?r1=232557&r2=588057
> 
> That's indeed the culprit.
> 
> In general, that patch looks truly suspicious since it seems to me 
> it's typecasting wildly and not even using its newly invented 
> MAX_APR_SIZE_T in all places, because (apr_size_t)(-1) really is the 
> same thing, right?

No, MAX_APR_SIZE_T and (apr_size_t)(-1) might be different depending on the
platform. MAX_APR_SIZE_T is ~(apr_size_t)(0).

As I have overcome my confusion regarding apr_off_t / apr_size_t, I hope to have
a look into the problem and find a solution for how to do all the casting
correctly.

Regards

Rüdiger

> 
> /Nikke
> -- 
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> -=-=-=-=-=-=-
>   Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     
> nikke@acc.umu.se
> --------------------------------------------------------------
> -------------
>   Windows IS NOT a virus...viruses do something.
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> =-=-=-=-=-=-=
> 

Re: httpd 2.2.8 segfaults

Posted by Niklas Edmundsson <ni...@acc.umu.se>.
On Thu, 21 Feb 2008, Ruediger Pluem wrote:

>
>
> On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
>> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
>> 
>>> In any case, I should probably try to figure out how to reproduce this 
>>> thing. All coredumps I've looked at have been when serving DVD images, 
>>> which of course works flawlessly when I try it...
>> 
>> OK, I've been able to reproduce this, and it looks really bad because:
>
> Could you please check if backing out the following patch out of apr-util
> 'fixes' the problem:
>
> http://svn.apache.org/viewvc/apr/apr-util/branches/1.2.x/buckets/apr_brigade.c?r1=232557&r2=588057

That's indeed the culprit.

In general, that patch looks truly suspicious since it seems to me 
it's typecasting wildly and not even using its newly invented 
MAX_APR_SIZE_T in all places, because (apr_size_t)(-1) really is the 
same thing, right?

/Nikke
-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     nikke@acc.umu.se
---------------------------------------------------------------------------
  Windows IS NOT a virus...viruses do something.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/21/2008 10:33 PM, Ruediger Pluem wrote:
> 
> 
> On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
>> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
>>
>>> In any case, I should probably try to figure out how to reproduce 
>>> this thing. All coredumps I've looked at have been when serving DVD 
>>> images, which of course works flawlessly when I try it...
>>
>> OK, I've been able to reproduce this, and it looks really bad because:
> 
> Could you please check if backing out the following patch out of apr-util
> 'fixes' the problem:
> 
> http://svn.apache.org/viewvc/apr/apr-util/branches/1.2.x/buckets/apr_brigade.c?r1=232557&r2=588057 

Alternatively, could you give the attached patch a try? It will not apply cleanly
against apr-util 1.2.x, because APR_SIZE_MAX is not defined there; that would need
to be fixed first.

Regards

Rüdiger


Re: httpd 2.2.8 segfaults

Posted by Ruediger Pluem <rp...@apache.org>.

On 02/21/2008 10:09 PM, Niklas Edmundsson wrote:
> On Wed, 20 Feb 2008, Niklas Edmundsson wrote:
> 
>> In any case, I should probably try to figure out how to reproduce this 
>> thing. All coredumps I've looked at have been when serving DVD images, 
>> which of course works flawlessly when I try it...
> 
> OK, I've been able to reproduce this, and it looks really bad because:

Could you please check if backing out the following patch out of apr-util
'fixes' the problem:

http://svn.apache.org/viewvc/apr/apr-util/branches/1.2.x/buckets/apr_brigade.c?r1=232557&r2=588057

Regards

Rüdiger