Posted to dev@httpd.apache.org by Cliff Woolley <cl...@yahoo.com> on 2001/11/28 23:18:43 UTC

extraneous flushes after request is handled?

Can someone explain to me why, in the following gdb trace, even after the
entire request has been sent down the line (EOS and all), there are two more
calls down the stack with FLUSH buckets?  This is the worker MPM (dunno if
that matters) and an HTTP/0.9 GET request for an 8KB parsed file.  One of
them seems to be lingering-close related, but I'm not sure about the other
one.  They might both be totally correct; I just wasn't expecting to see
them happen.
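
(For context, here is roughly what I mean by "EOS and all" versus a FLUSH,
from a handler's point of view.  This is only an illustrative sketch -- not
code from the tree -- and the bucket-allocator arguments follow the newer
2.x style, so treat the exact signatures as assumptions.  The trace itself
follows below.)

/* Illustrative sketch only: a handler ends a response with an EOS bucket,
 * while a FLUSH bucket sent down the stack means "write now". */
#include "httpd.h"
#include "http_protocol.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t send_body_then_eos(request_rec *r, const char *buf,
                                       apr_size_t len)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);

    /* The response body itself... */
    apr_brigade_write(bb, NULL, NULL, buf, len);

    /* ...followed by EOS: "this response is finished".  The core output
     * filter may still set the data aside instead of writing immediately. */
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));
    return ap_pass_brigade(r->output_filters, bb);
}

static apr_status_t force_a_flush(conn_rec *c)
{
    apr_bucket_brigade *bb = apr_brigade_create(c->pool, c->bucket_alloc);

    /* FLUSH: "write whatever you have buffered, right now". */
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_flush_create(c->bucket_alloc));
    return ap_pass_brigade(c->output_filters, bb);
}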

--Cliff

Breakpoint 1, core_output_filter (f=0x818473c, b=0x818d81c) at core.c:3008
3008        conn_rec *c = f->c;
(gdb) dump_brigade b
dump of brigade 0x818d81c
   0: bucket=MMAP(0x823e248), length=8192, data=0x823e2e8
   1: bucket=EOS(0x823e2c8), length=0, data=0x0
(gdb) bt
#0  core_output_filter (f=0x818473c, b=0x818d81c) at core.c:3008
#1  0x80b9835 in ap_pass_brigade (next=0x818473c, bb=0x818d81c)
    at util_filter.c:388
#2  0x808855e in ap_http_header_filter (f=0x818d70c, b=0x818d81c)
    at http_protocol.c:1239
#3  0x80b9835 in ap_pass_brigade (next=0x818d70c, bb=0x818d81c)
    at util_filter.c:388
#4  0x80bb467 in ap_content_length_filter (f=0x818d6f4, b=0x818d81c)
    at protocol.c:985
#5  0x80b9835 in ap_pass_brigade (next=0x818d6f4, bb=0x818d81c)
    at util_filter.c:388
#6  0x8089d7b in ap_byterange_filter (f=0x818d6dc, bb=0x818d81c)
    at http_protocol.c:2507
#7  0x80b9835 in ap_pass_brigade (next=0x818d6dc, bb=0x818d81c)
    at util_filter.c:388
#8  0x806e62f in send_parsed_content (bb=0xbf3fd940, r=0x818c3ec, f=0x818d6c4)
    at mod_include.c:2956
#9  0x806ea5f in includes_filter (f=0x818d6c4, b=0x818d81c)
    at mod_include.c:3116
#10 0x80b9835 in ap_pass_brigade (next=0x818d6c4, bb=0x818d81c)
    at util_filter.c:388
#11 0x80bf198 in default_handler (r=0x818c3ec) at core.c:2780
#12 0x80b01c9 in ap_run_handler (r=0x818c3ec) at config.c:185
#13 0x80b0653 in ap_invoke_handler (r=0x818c3ec) at config.c:350
#14 0x808a380 in ap_process_request (r=0x818c3ec) at http_request.c:292
#15 0x8086d9b in ap_process_http_connection (c=0x818448c) at http_core.c:283
#16 0x80b8258 in ap_run_process_connection (c=0x818448c) at connection.c:84
#17 0x80b84f4 in ap_process_connection (c=0x818448c) at connection.c:229
#18 0x80adf16 in process_socket (p=0x818437c, sock=0x81843ac, my_child_num=0,
    my_thread_num=0) at worker.c:502
#19 0x80ae34c in worker_thread (thd=0x81009b4, dummy=0x822df48) at worker.c:716
#20 0x4003847e in dummy_worker (opaque=0x81009b4) at thread.c:122
#21 0x40255065 in pthread_start_thread (arg=0xbf3ffc00) at manager.c:274
(gdb) continue
Continuing.
 
Breakpoint 1, core_output_filter (f=0x818473c, b=0x818deec) at core.c:3008
3008        conn_rec *c = f->c;
(gdb) dump_brigade b
dump of brigade 0x818deec
   0: bucket=FLUSH(0x823e248), length=0, data=0x0
(gdb) bt
#0  core_output_filter (f=0x818473c, b=0x818deec) at core.c:3008
#1  0x80b9835 in ap_pass_brigade (next=0x818473c, bb=0x818deec)
    at util_filter.c:388
#2  0x808a345 in check_pipeline_flush (r=0x818c3ec) at http_request.c:262
#3  0x808a3b6 in ap_process_request (r=0x818c3ec) at http_request.c:313
#4  0x8086d9b in ap_process_http_connection (c=0x818448c) at http_core.c:283
#5  0x80b8258 in ap_run_process_connection (c=0x818448c) at connection.c:84
#6  0x80b84f4 in ap_process_connection (c=0x818448c) at connection.c:229
#7  0x80adf16 in process_socket (p=0x818437c, sock=0x81843ac, my_child_num=0,
    my_thread_num=0) at worker.c:502
#8  0x80ae34c in worker_thread (thd=0x81009b4, dummy=0x822df48) at worker.c:716
#9  0x4003847e in dummy_worker (opaque=0x81009b4) at thread.c:122
#10 0x40255065 in pthread_start_thread (arg=0xbf3ffc00) at manager.c:274
(gdb) continue
Continuing.
 
Breakpoint 1, core_output_filter (f=0x818473c, b=0x818482c) at core.c:3008
3008        conn_rec *c = f->c;
(gdb) dump_brigade b
dump of brigade 0x818482c
   0: bucket=FLUSH(0x823e248), length=0, data=0x0
(gdb) bt
#0  core_output_filter (f=0x818473c, b=0x818482c) at core.c:3008
#1  0x80b9835 in ap_pass_brigade (next=0x818473c, bb=0x818482c)
    at util_filter.c:388
#2  0x80b83b0 in ap_flush_conn (c=0x818448c) at connection.c:143
#3  0x80b8426 in ap_lingering_close (c=0x818448c) at connection.c:184
#4  0x80adf1f in process_socket (p=0x818437c, sock=0x81843ac, my_child_num=0,
    my_thread_num=0) at worker.c:503
#5  0x80ae34c in worker_thread (thd=0x81009b4, dummy=0x822df48) at worker.c:716
#6  0x4003847e in dummy_worker (opaque=0x81009b4) at thread.c:122
#7  0x40255065 in pthread_start_thread (arg=0xbf3ffc00) at manager.c:274
(gdb) continue
Continuing.

--------------------------------------------------------------
   Cliff Woolley
   cliffwoolley@yahoo.com
   Charlottesville, VA



Re: extraneous flushes after request is handled?

Posted by Ryan Bloom <rb...@covalent.net>.
On Wednesday 28 November 2001 02:18 pm, Cliff Woolley wrote:

The lingering-close one is correct, and I would bet the other is not a bug
either.  The other one is happening because of pipelining.  Basically, you
have gotten an EOS bucket, but the core might have squirreled away the data
to be sent with the next response over this connection.  If
check_pipeline_flush doesn't find another request, it has to flush the
core_output_filter to get that data out.  If you look at core_output_filter,
you will see that an EOS bucket allows the data to be saved, but a FLUSH
bucket is taken to mean "write that data no matter what".
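
In rough pseudo-C, the policy looks something like the sketch below.  This
is only an illustration of the idea, not the code in core.c -- sketch_ctx_t,
THRESHOLD, and the elided socket write are made up for the example.

/* Simplified sketch of the EOS-vs-FLUSH policy described above; NOT the
 * real core_output_filter. */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

#define THRESHOLD 8000                   /* made-up set-aside limit */

typedef struct {
    apr_bucket_brigade *saved;           /* data squirreled away at EOS time */
} sketch_ctx_t;

static apr_status_t sketch_output_filter(ap_filter_t *f,
                                         apr_bucket_brigade *bb)
{
    sketch_ctx_t *ctx = f->ctx;
    apr_bucket *e;
    apr_off_t len = 0;
    int seen_flush = 0, seen_eos = 0;

    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        if (APR_BUCKET_IS_FLUSH(e)) seen_flush = 1;
        if (APR_BUCKET_IS_EOS(e))   seen_eos = 1;
    }
    apr_brigade_length(bb, 1, &len);

    /* EOS only: the response is done, but a small brigade may be saved in
     * the hope of coalescing it with the next pipelined response. */
    if (seen_eos && !seen_flush && len < THRESHOLD) {
        return ap_save_brigade(f, &ctx->saved, &bb, f->c->pool);
    }

    /* FLUSH (or a large brigade): prepend anything saved earlier and write
     * it all out now. */
    if (ctx->saved != NULL) {
        APR_BRIGADE_CONCAT(ctx->saved, bb);
        bb = ctx->saved;
        ctx->saved = NULL;
    }
    /* ...the actual writev()/sendfile() of bb to the client goes here
     * (elided in this sketch)... */
    return APR_SUCCESS;
}

That lines up with your two traces: the first FLUSH comes from
check_pipeline_flush when no pipelined request is waiting, and the second
comes from ap_lingering_close via ap_flush_conn.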

Ryan

> Can someone explain to me why, in the following gdb trace, even after the
> entire request has been sent down the line (EOS and all), there are two more
> calls down the stack with FLUSH buckets?  This is the worker MPM (dunno if
> that matters) and an HTTP/0.9 GET request for an 8KB parsed file.  One of
> them seems to be lingering-close related, but I'm not sure about the other
> one.  They might both be totally correct; I just wasn't expecting to see
> them happen.
>


______________________________________________________________
Ryan Bloom				rbb@apache.org
Covalent Technologies			rbb@covalent.net
--------------------------------------------------------------