Posted to test-dev@httpd.apache.org by Norman Tuttle <nt...@photon.poly.edu> on 2003/11/04 18:47:30 UTC

Redo of: Diff for flood_net_ssl.c [Ref A1]

See below for an explanation of the change; the attached diff is now correct.

-Norman Tuttle, OpenDemand Systems Developer ntuttle@opendemand.com

On Mon, 13 Oct 2003, Norman Tuttle wrote:

> To Apache Flood development team:
> 
> Other than some small touch-ups, these changes to flood_net_ssl.c involve
> 1) taking the recursive code out of the socket read/write functions
> (replacing it with a do .. while loop), resulting in
> a) more robust code, with less possibility of stack-related issues;
> b) proper propagation of errors, which were being lost in the
> recursively-called code; and
> c) iterative code on which it is easier to set limits on the amount of
> iteration, if necessary (I have not changed the logic here yet);
> and
> 2) the addition of certain cases which should cause a continuation of
> reading.
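
(For illustration, here is a minimal sketch of the iterative pattern
described in item 1 above. It assumes an OpenSSL-based read wrapper in the
spirit of flood_net_ssl.c; the names, the retry conditions and the error
mapping are illustrative, not the actual patch.)

/* Illustrative only: an iterative (do .. while) SSL read that retries the
 * "continue reading" cases and propagates every other error to the caller
 * instead of recursing. */
#include <openssl/ssl.h>
#include <apr_errno.h>

static apr_status_t ssl_read_sketch(SSL *ssl, char *buf, int *len)
{
    int n, err;

    do {
        n = SSL_read(ssl, buf, *len);
        err = (n > 0) ? SSL_ERROR_NONE : SSL_get_error(ssl, n);
        /* SSL_ERROR_WANT_READ / SSL_ERROR_WANT_WRITE mean "try again";
         * a real implementation could cap the number of retries here. */
    } while (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE);

    if (n > 0) {
        *len = n;            /* bytes actually read */
        return APR_SUCCESS;
    }
    if (err == SSL_ERROR_ZERO_RETURN) {
        *len = 0;            /* clean shutdown by the peer */
        return APR_EOF;
    }
    return APR_EGENERAL;     /* propagate the failure instead of hiding it */
}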



Re: Dangerous Flood memory model compromises larger runs

Posted by Norman Tuttle <nt...@photon.poly.edu>.
Flood developers, APR gurus, Cliff Woolley, Sander Striker, etc.:

Looks like I'm in the process of implementing the needed fix of adding
multiple pool levels to Flood. The decision point is between allocating a
new pool from the parent pool for every repeated instance (i.e. each time
a URL test is attempted by a virtual user) and destroying it before the
next URL is encountered, or creating the request/response subpool once
from its parent and clearing it after each URL finishes. I want to know
(1) whether any memory is actually released when a pool is cleared, and
(2) what you see as the performance hit for each method. I notice that
Flood, at the "session" level (a virtual user or "farmer" executing one
round of a list of URLs in a profile), takes the second option, clearing
the subpool after each session. Also, if memory is not actually released
in that event, will the cleared subpool at least properly reuse the memory
it has already allocated? Once the farmer_pool has been cleared, I assume
that any subpools it had would need to be recreated at that point (by an
apr_pool_create()).
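
(For concreteness, a minimal sketch of the two options being weighed,
using only standard APR pool calls; url_pool and run_one_url() are
illustrative names, not Flood code.)

#include <apr_pools.h>

/* Option 1: create a fresh subpool per URL and destroy it afterwards. */
static void per_url_create_destroy(apr_pool_t *parent, int url_count)
{
    int i;
    for (i = 0; i < url_count; i++) {
        apr_pool_t *url_pool;
        apr_pool_create(&url_pool, parent);
        /* run_one_url(url_pool, i);  -- all request/response allocations
         * would come from url_pool */
        apr_pool_destroy(url_pool);   /* blocks go back to the allocator */
    }
}

/* Option 2: create the subpool once and clear it between URLs. */
static void per_url_clear(apr_pool_t *parent, int url_count)
{
    int i;
    apr_pool_t *url_pool;
    apr_pool_create(&url_pool, parent);
    for (i = 0; i < url_count; i++) {
        /* run_one_url(url_pool, i); */
        apr_pool_clear(url_pool);     /* ready for reuse on the next URL */
    }
    apr_pool_destroy(url_pool);
}

Note that clearing a parent pool also destroys its child pools, so the
assumption at the end of the question holds: any subpool of farmer_pool
has to be re-created with apr_pool_create() after the parent is cleared.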

-Norman Tuttle ntuttle@opendemand.com Developer, OpenDemand Systems

PS: After a call to apr_socket_connect(), is the memory pointed to by its
2nd (apr_sockaddr_t *) argument required to stay around for the lifetime
of the socket, or is its value used immediately so that it can be
deallocated? I will check myself, but perhaps someone knows offhand.
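
(A sketch of the usual calling pattern, for reference, using the APR 1.x
signature of apr_socket_create(). In typical APR code the apr_sockaddr_t
is simply allocated from a pool that lives at least as long as the socket,
which sidesteps the lifetime question.)

#include <apr_network_io.h>

/* Connect to host:port; both the socket and the sockaddr come out of the
 * same pool, so the address trivially outlives the socket. */
static apr_status_t connect_sketch(apr_socket_t **sock, const char *host,
                                   apr_port_t port, apr_pool_t *pool)
{
    apr_sockaddr_t *sa;
    apr_status_t rv;

    rv = apr_sockaddr_info_get(&sa, host, APR_INET, port, 0, pool);
    if (rv != APR_SUCCESS)
        return rv;

    rv = apr_socket_create(sock, sa->family, SOCK_STREAM, APR_PROTO_TCP,
                           pool);
    if (rv != APR_SUCCESS)
        return rv;

    return apr_socket_connect(*sock, sa);
}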

On Thu, 13 Nov 2003, Sander Striker wrote:

> On Thu, 2003-11-13 at 16:26, Cliff Woolley wrote:
> > On Thu, 13 Nov 2003, Norman Tuttle wrote:
> > 
> > > How do the pools define "if possible" in your wording below (i.e., how
> > > would the pool know when to reuse memory)?
> > 
> > It's kind of complicated, so I don't know how well I can explain it off
> > the top of my head (Sander, feel free to jump in here :), but it keeps
> > freelist buckets of varying power-of-two sizes, and if it finds one of the
> > appropriate size, it will use it.
> 
> The allocator keeps freelists of multiples of 4k blocks actually ;)
> 
> > But there are two levels of things going on, too, because the allocator
> > hands out blocks of a certain size, which the pools then divide up into
> > smaller blocks...
> 
> Pools get blocks from the allocator.  Pools then hand out the requested
> memory to the caller.  They hold on to the 'surplus' for the next
> allocation.
> 
> On a pool clear, all used blocks return to the allocator (except for
> the block containing the pool itself (8k)).
> 
> > Oy.
> > 
> > Sander?  Help me out.  :)
> 
> You know what, this question comes up reasonably frequently.  Enough for
> me to start writing a document on this.
> 
> I'll post a URL later on.
> 
> 
> Sander
> 


Re: Dangerous Flood memory model compromises larger runs

Posted by Norman Tuttle <nt...@photon.poly.edu>.
Thank you for your input (and immediate response), Sander and Cliff. We
hope to use it to help resolve our issues.

-Norman Tuttle ntuttle@opendemand.com

On Thu, 13 Nov 2003, Sander Striker wrote:

> On Thu, 2003-11-13 at 16:26, Cliff Woolley wrote:
> > On Thu, 13 Nov 2003, Norman Tuttle wrote:
> > 
> > > How do the pools define "if possible" in your wording below (i.e., how
> > > would the pool know when to reuse memory)?
> > 
> > It's kind of complicated, so I don't know how well I can explain it off
> > the top of my head (Sander, feel free to jump in here :), but it keeps
> > freelist buckets of varying power-of-two sizes, and if it finds one of the
> > appropriate size, it will use it.
> 
> The allocator keeps freelists of multiples of 4k blocks actually ;)
> 
> > But there are two levels of things going on, too, because the allocator
> > hands out blocks of a certain size, which the pools then divide up into
> > smaller blocks...
> 
> Pools get blocks from the allocator.  Pools then hand out the requested
> memory to the caller.  They hold on to the 'surplus' for the next
> allocation.
> 
> On a pool clear, all used blocks return to the allocator (except for
> the block containing the pool itself (8k)).
> 
> > Oy.
> > 
> > Sander?  Help me out.  :)
> 
> You know what, this question comes up reasonably frequently.  Enough for
> me to start writing a document on this.
> 
> I'll post a URL later on.
> 
> 
> Sander
> 


Re: Dangerous Flood memory model compromises larger runs

Posted by Sander Striker <st...@apache.org>.
On Thu, 2003-11-13 at 16:26, Cliff Woolley wrote:
> On Thu, 13 Nov 2003, Norman Tuttle wrote:
> 
> > How do the pools define "if possible" in your wording below (i.e., how
> > would the pool know when to reuse memory)?
> 
> It's kind of complicated, so I don't know how well I can explain it off
> the top of my head (Sander, feel free to jump in here :), but it keeps
> freelist buckets of varying power-of-two sizes, and if it finds one of the
> appropriate size, it will use it.

The allocator keeps freelists of multiples of 4k blocks actually ;)

> But there are two levels of things going on, too, because the allocator
> hands out blocks of a certain size, which the pools then divide up into
> smaller blocks...

Pools get blocks from the allocator.  Pools then hand out the requested
memory to the caller.  They hold on to the 'surplus' for the next
allocation.

On a pool clear, all used blocks return to the allocator (except for
the block containing the pool itself (8k)).

> Oy.
> 
> Sander?  Help me out.  :)

You know what, this question comes up reasonably frequently.  Enough for
me to start writing a document on this.

I'll post a URL later on.


Sander
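
(A small sketch of the reuse behaviour described above: on each clear the
pool's blocks go back to the allocator's freelists, and the next
allocation is satisfied from those freelists rather than from a fresh
malloc(). Illustrative only.)

#include <apr_general.h>
#include <apr_pools.h>

int main(void)
{
    apr_pool_t *pool;
    int i;

    apr_initialize();
    apr_pool_create(&pool, NULL);

    for (i = 0; i < 1000; i++) {
        /* ~1MB per iteration; on clear the memory is not handed back to
         * the OS, it returns to the allocator (in multiples of 4k) and is
         * reused on the next pass, so the process does not keep growing. */
        (void) apr_palloc(pool, 1024 * 1024);
        apr_pool_clear(pool);
    }

    apr_pool_destroy(pool);
    apr_terminate();
    return 0;
}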

Re: Dangerous Flood memory model compromises larger runs

Posted by Cliff Woolley <jw...@virginia.edu>.
On Thu, 13 Nov 2003, Norman Tuttle wrote:

> How do the pools define "if possible" in your wording below (i.e., how
> would the pool know when to reuse memory)?

It's kind of complicated, so I don't know how well I can explain it off
the top of my head (Sander, feel free to jump in here :), but it keeps
freelist buckets of varying power-of-two sizes, and if it finds one of the
appropriate size, it will use it.

But there are two levels of things going on, too, because the allocator
hands out blocks of a certain size, which the pools then divide up into
smaller blocks...

Oy.

Sander?  Help me out.  :)

Re: Dangerous Flood memory model compromises larger runs

Posted by Norman Tuttle <nt...@photon.poly.edu>.
How do the pools define "if possible" in your wording below (i.e., how
would the pool know when to reuse memory)?

-Norman Tuttle

On Thu, 13 Nov 2003, Cliff Woolley wrote:

> On Thu, 13 Nov 2003, Norman Tuttle wrote:
> 
> > around without generating any data, and (3) that the data (timings, in
> > particular) itself seems to be suspect when we are in the process of
> > "hitting the rail". I was wondering whether (1) other people have seen
> > this issue with this or other applications using apr pools, and (2)
> > whether there is any "quick" fix that people can see to remedy this
> > problem. I understand that there is still work to be done to Flood to
> 
> APR pools allocate but do not automatically deallocate.  They hang onto
> the memory they have and reuse it later if possible.  If you want to set a
> limit on the amount the pools' underlying allocator will hang onto, use
> apr_allocator_create(), call apr_allocator_set_max_free() or whatever it's
> called, and then use apr_pool_create_ex() to create the pool with that
> "limited" allocator.
> 
> Have a look at the prefork or worker MPMs from httpd for an example.
> 
> --Cliff
> 


Re: Dangerous Flood memory model compromises larger runs

Posted by Cliff Woolley <jw...@virginia.edu>.
On Thu, 13 Nov 2003, Norman Tuttle wrote:

> around without generating any data, and (3) that the data (timings, in
> particular) itself seems to be suspect when we are in the process of
> "hitting the rail". I was wondering whether (1) other people have seen
> this issue with this or other applications using apr pools, and (2)
> whether there is any "quick" fix that people can see to remedy this
> problem. I understand that there is still work to be done to Flood to

APR pools allocate but do not automatically deallocate.  They hang onto
the memory they have and reuse it later if possible.  If you want to set a
limit on the amount the pools' underlying allocator will hang onto, use
apr_allocator_create(), call apr_allocator_set_max_free() or whatever it's
called, and then use apr_pool_create_ex() to create the pool with that
"limited" allocator.

Have a look at the prefork or worker MPMs from httpd for an example.

--Cliff
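
(A minimal sketch of what Cliff describes; in the APR headers the call is
spelled apr_allocator_max_free_set(). Illustrative, not taken from the
MPMs.)

#include <apr_allocator.h>
#include <apr_pools.h>

/* Create a pool whose underlying allocator keeps at most ~1MB of free
 * blocks around; anything beyond that limit is returned to the system. */
static apr_status_t make_limited_pool(apr_pool_t **pool)
{
    apr_allocator_t *allocator;
    apr_status_t rv;

    rv = apr_allocator_create(&allocator);
    if (rv != APR_SUCCESS)
        return rv;

    apr_allocator_max_free_set(allocator, 1024 * 1024);

    rv = apr_pool_create_ex(pool, NULL, NULL, allocator);
    if (rv != APR_SUCCESS) {
        apr_allocator_destroy(allocator);
        return rv;
    }

    /* Tie the allocator's lifetime to the pool so that destroying the
     * pool also destroys the allocator. */
    apr_allocator_owner_set(allocator, *pool);
    return APR_SUCCESS;
}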

Dangerous Flood memory model compromises larger runs

Posted by Norman Tuttle <nt...@photon.poly.edu>.
In a 100-user test, the Flood 1.1 executable under Windows consistently
grows in memory usage, even while the number of threads is slowly
diminishing (at the tail end of the test). While running a "Flood clone"
which uses the pools-based Flood memory model, but generates more data per
URL as well as data which must be maintained per page and per session, we
find that under heavy load and/or long durations we are seeing (1)
segmentation faults, (2) cases where users seem to hang around without
generating any data, and (3) data (timings, in particular) that seems
suspect when we are "hitting the rail".

I was wondering (1) whether other people have seen this issue with this or
other applications using APR pools, and (2) whether there is any "quick"
fix that people can see to remedy the problem. I understand that there is
still work to be done in Flood to create pools at lower levels, but the
habit of simply allocating memory whenever it is needed, without cleaning
up (on the assumption that the pools will be cleaned up at a higher
level), is a practice bordering on disaster. I was also wondering whether
APR (current, or from about a year ago) has been tuned to prevent memory
leaks, or whether our current design makes them unavoidable. I suspect
that not much research has been done here. The goal of this email is not
to knock the current development on Flood but to ask for help in resolving
an issue we are facing. If we can overcome ours, we can also help Flood
overcome its own potential issues.

I appreciate any responses.

-Norman Tuttle, software developer, OpenDemand Systems
ntuttle@opendemand.com