Posted to modperl@perl.apache.org by Jeremy Howard <jh...@fastmail.fm> on 2000/12/22 07:38:19 UTC

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

Perrin Harkins wrote:
> What I was saying is that it doesn't make sense for one to need fewer
> interpreters than the other to handle the same concurrency.  If you have
> 10 requests at the same time, you need 10 interpreters.  There's no way
> speedycgi can do it with fewer, unless it actually makes some of them
> wait.  That could be happening, due to the fork-on-demand model, although
> your warmup round (priming the pump) should take care of that.
>
I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
if, for instance, a large POST request is being uploaded, it ties up a whole
Perl interpreter for the duration of the upload. This is at least one place
where a Perl interpreter should not be needed.

Of course, this could be overcome if an HTTP accelerator is used that takes
in the whole request before passing it to a local httpd, but I don't know of
any proxies that work this way (AFAIK they all pass the packets along as they
arrive).
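
To illustrate what I mean by "takes in the whole request", here's a rough
single-process sketch in Perl. It's untested, the backend address and port
are made up, and a real accelerator would need concurrency, size limits, and
header scrubbing, but it shows the store-and-forward step:

  #!/usr/bin/perl -w
  # Rough sketch of the "accelerator" idea: swallow the *entire* client
  # request (slow POST body included) before touching the backend, so a
  # heavy Perl backend is never tied up waiting on a modem upload.
  use strict;
  use HTTP::Daemon;
  use LWP::UserAgent;

  my $backend = 'http://localhost:8080';        # hypothetical mod_perl server
  my $d  = HTTP::Daemon->new(LocalPort => 8000) or die "listen: $!";
  my $ua = LWP::UserAgent->new;

  while (my $c = $d->accept) {
      # get_request() blocks until the full request, body and all, has
      # arrived -- that's the store-and-forward step.
      while (my $req = $c->get_request) {
          $req->remove_header('Host');          # let LWP set it for the backend
          $req->uri($backend . $req->uri->path_query);
          my $res = $ua->request($req);         # backend is only busy here
          $c->send_response($res);
      }
      $c->close;
  }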



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

Posted by Jeremy Howard <jh...@fastmail.fm>.
Joe Schaefer wrote:
> "Jeremy Howard" <jh...@fastmail.fm> writes:
> > I don't know if Speedy fixes this, but one problem with mod_perl v1 is
> > that if, for instance, a large POST request is being uploaded, it ties up
> > a whole Perl interpreter for the duration of the upload. This is at least
> > one place where a Perl interpreter should not be needed.
> >
> > Of course, this could be overcome if an HTTP accelerator is used that
> > takes in the whole request before passing it to a local httpd, but I don't
> > know of any proxies that work this way (AFAIK they all pass the packets
> > along as they arrive).
>
> I posted a patch to mod_proxy a few months ago that specifically
> addresses this issue.  It has a ProxyPostMax directive that changes
> its behavior to a store-and-forward proxy for POST data (it also enables
> keepalives on the browser-side connection if they were enabled on the
> frontend server).
>
FYI, this patch is at:

  http://www.mail-archive.com/modperl@apache.org/msg11072.html
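
For anyone trying it out, I'd guess the patched front end ends up configured
along these lines. The ProxyPass lines are standard mod_proxy; the
ProxyPostMax value and its units are my assumption from the patch
description, so check the patch itself:

  # Hypothetical httpd.conf fragment for the patched mod_proxy front end
  ProxyPass        /app http://localhost:8080/app
  ProxyPassReverse /app http://localhost:8080/app
  # Buffer POST bodies (up to the limit) before opening the backend socket
  ProxyPostMax     1048576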



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

Posted by Gunther Birznieks <gu...@extropia.com>.
At 10:17 PM 12/22/2000 -0500, Joe Schaefer wrote:
>"Jeremy Howard" <jh...@fastmail.fm> writes:
>
>[snipped]
>I posted a patch to mod_proxy a few months ago that specifically
>addresses this issue.  It has a ProxyPostMax directive that changes
>its behavior to a store-and-forward proxy for POST data (it also enables
>keepalives on the browser-side connection if they were enabled on the
>frontend server).
>
>It does this by buffering the data to a temp file on the proxy before
>opening the backend socket.  It's straightforward to make it buffer to
>a portion of RAM instead; if you're interested I can post another patch
>that does that as well, but it's pretty much untested.

Cool! Are these patches now incorporated into the core mod_proxy that we
download off the web? Or do we have to trawl through the mailing list
archives to find the patch?

(Similar question about the remote-user forwarding patch someone posted
last year.)

Thanks,
     Gunther


Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

Posted by Joe Schaefer <jo...@sunstarsys.com>.
"Jeremy Howard" <jh...@fastmail.fm> writes:

> Perrin Harkins wrote:
> > What I was saying is that it doesn't make sense for one to need fewer
> > interpreters than the other to handle the same concurrency.  If you have
> > 10 requests at the same time, you need 10 interpreters.  There's no way
> > speedycgi can do it with fewer, unless it actually makes some of them
> > wait.  That could be happening, due to the fork-on-demand model, although
> > your warmup round (priming the pump) should take care of that.

A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the data has been delivered
to the client (at least that's my understanding of the lingering-close
issue that was recently discussed at length here). Hypothetically speaking,
if a "FastCGI-like"[1] backend can deliver its content faster than the
Apache (front-end) server can "proxy" it to the client, you won't need as
many backends to handle the same (front-end) traffic load.

As an extreme hypothetical example, say that over a 5-second period you
are barraged with 100 modem requests that would each typically take 5s to
service.  This means (sans lingerd :) that at the end of your 5-second
period, you have 100 active Apache children around.

But if new requests during that 5-second interval only arrived at
20/second, and your "FastCGI-like" server could deliver the content to
Apache in one second, you might only have forked 50-60 new "FastCGI-like"
processes to handle all 100 requests (forks take a little time :).
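
To make the arithmetic explicit: this is just Little's law (concurrency =
arrival rate x time each process stays busy) applied to the made-up numbers
above:

  #!/usr/bin/perl -w
  # Back-of-the-envelope check of the hypothetical above.
  use strict;

  my $rate          = 20;  # sustained new requests per second (assumed)
  my $frontend_time = 5;   # seconds a frontend child is tied to a modem client
  my $backend_time  = 1;   # seconds the "FastCGI-like" backend needs per hit

  printf "frontend children needed: %d\n", $rate * $frontend_time;  # 100
  printf "backend processes needed: %d\n", $rate * $backend_time;   # 20
  # The 50-60 forks above are this steady-state 20 plus the extra
  # processes forked while the pool was still catching up to the burst.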

Moreover, an MRU design allows the transient effects of a short burst 
of abnormally heavy traffic to dissipate quickly, and IMHO that's its 
chief advantage over LRU.  To return to this hypothetical, suppose 
that immediately following this short burst, we maintain a sustained 
traffic of 20 new requests per second. Since it takes 5 seconds to 
deliver the content, that amounts to a sustained concurrency level 
of 100. The "FastCGI-like" backend may have initially reacted by forking 
50-60 processes, but with MRU only 20-30 processes will actually be 
handling the load, and this reduction would happen almost immediately 
in this hypothetical.  This means that the remaining transient 20-30 
processes could be quickly killed off or _moved to swap_ without adversely 
affecting server performance.
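
Here's a toy script showing the working-set difference I'm describing; the
numbers are made up to match the hypothetical, and it's an illustration
rather than a benchmark:

  #!/usr/bin/perl -w
  # With 60 idle workers but only 20 busy at a time, MRU keeps re-using
  # the same 20 (the rest can be reaped or swapped out), while LRU
  # cycles through all 60 and keeps every process warm.
  use strict;

  for my $policy (qw(MRU LRU)) {
      my @idle = (1 .. 60);        # worker ids, all idle to start
      my %touched;
      for my $tick (1 .. 1000) {   # 1000 rounds of 20 concurrent requests
          my @busy;
          for (1 .. 20) {
              # MRU pops the most recently freed worker; LRU takes the
              # one that has been idle longest.
              my $w = $policy eq 'MRU' ? pop @idle : shift @idle;
              $touched{$w}++;
              push @busy, $w;
          }
          push @idle, @busy;       # the whole batch finishes and goes idle
      }
      printf "%s touched %d of 60 workers\n", $policy, scalar keys %touched;
  }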

Again, this is all purely hypothetical - I don't have benchmarks to
back it up ;)

> I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
> if, for instance, a large POST request is being uploaded, it ties up a whole
> Perl interpreter for the duration of the upload. This is at least one place
> where a Perl interpreter should not be needed.
> 
> Of course, this could be overcome if an HTTP accelerator is used that takes
> in the whole request before passing it to a local httpd, but I don't know of
> any proxies that work this way (AFAIK they all pass the packets along as
> they arrive).

I posted a patch to mod_proxy a few months ago that specifically 
addresses this issue.  It has a ProxyPostMax directive that changes 
its behavior to a store-and-forward proxy for POST data (it also enables 
keepalives on the browser-side connection if they were enabled on the 
frontend server).

It does this by buffering the data to a temp file on the proxy before 
opening the backend socket.  It's straightforward to make it buffer to 
a portion of RAM instead; if you're interested I can post another patch 
that does that as well, but it's pretty much untested.


[1] I've never used SpeedyCGI, so I've refrained from specifically discussing 
    it. Also, a mod_perl backend server using Apache::Registry can be viewed as 
    "FastCGI-like" for the purpose of my argument.

-- 
Joe Schaefer