Posted to dev@httpd.apache.org by Steven B <st...@teamholistic.com> on 2010/04/14 03:31:16 UTC

FCGID: Changes in behaviour of DefaultMaxClassProcessCount/FcgidMaxProcessesPerClass negatively impacting shared hosting providers

At some point a decision was made that changed the maximum PHP process
limit from per-user to per-vhost between the old mod_fcgid and the newer
2.3.5 version.

This is really affecting us on servers where we had a nice, simple way to
say "users shall have X PHP processes and no more!" With the update to
2.3.5 that limit effectively becomes "X times the number of vhosts you
have."

There needs to be some kind of replacement directive here somewhere. The
ability to limit people to a set number of spawned PHP processes based on
user ID is very valuable to control resource usage and this has been blown
away.
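
For reference, the limit in question is the one set by this directive
(DefaultMaxClassProcessCount in the 2.2 releases, renamed to
FcgidMaxProcessesPerClass in 2.3.x), here with the value from my example
below:

    # Allow at most 3 FastCGI (PHP) processes per process class.
    # In mod_fcgid 2.2 this was spelled DefaultMaxClassProcessCount.
    FcgidMaxProcessesPerClass 3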

This also introduces another problem when something causes the server to
slow down a little. Say, for example, you have 50 users on one server,
each running 10 vhosts, with a max process count of 3. Under the old
system the server would be limited to spawning 150 PHP processes. Okay,
so that sucks, but it can be handled: the server catches up, or a
watchful eye or script comes in and massages out the kinks. Under the
new method you now have the server spawning up to 1500 PHP processes.
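
Spelled out, the worst case goes from:

    old (per-user class):  50 users x 3 procs             =  150 PHP processes
    new (per-vhost class): 50 users x 10 vhosts x 3 procs = 1500 PHP processes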

I'm sure that example has lots of holes and might not be the best.

But we really need to get back a way to put a limit on PHP processes by
user. This per-vhost thing might be great in theory or for people who can
exercise an iron fist of control on their servers, but for people running
shared servers trying to sell a product and be competitive this change just
blows the lid off of one of the most valuable resource/user controls that we
had.

The real-life effect is that users are more able to impact others, plus a
10-20% increase in committed memory for all these extra PHP processes. So
far we've been able to counteract this a little by tightening up (a little
too tightly) the idle timeouts on the processes above the minimum, but
that in turn costs CPU.

Or I don't know, maybe I'm missing something?

Steve

Re: FCGID: Changes in behaviour of DefaultMaxClassProcessCount/FcgidMaxProcessesPerClass negatively impacting shared hosting providers

Posted by Jeff Trawick <tr...@gmail.com>.
On Tue, Apr 13, 2010 at 9:31 PM, Steven B <st...@teamholistic.com> wrote:
> At some point a decision was made that changed the maximum PHP process
> limit from per-user to per-vhost between the old mod_fcgid and the newer
> 2.3.5 version.

Maybe this is the commit?  (before mod_fcgid development moved here
but after 2.2 was released)

http://svn.apache.org/viewvc?view=revision&revision=753578

It added the check for vhost to the search for the data structure that
maintains the counter against which FcgidMaxProcessesPerClass is
compared.

(And it doesn't work as expected unless each vhost is given a distinct
ServerName directive, though I guess that is not a big concern.)

> This is really affecting us on servers where we had a nice, simple way to
> say "users shall have X PHP processes and no more!" With the update to
> 2.3.5 that limit effectively becomes "X times the number of vhosts you
> have."
>
> There needs to be some kind of replacement directive here somewhere. The
> ability to limit people to a set number of spawned PHP processes based on
> user ID is very valuable to control resource usage and this has been blown
> away.

The ability to control how the class is defined has come up several times.

These checks essentially define the class today:

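    /* Search the class list for the node matching this command; the
     * per-class counter checked against FcgidMaxProcessesPerClass
     * lives on that node. */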
    for (current_node = g_stat_list_header;
         current_node != NULL; current_node = current_node->next) {
        if (current_node->inode == command->inode
            && current_node->deviceid == command->deviceid
            && !strcmp(current_node->cmdline, command->cmdline)
            && current_node->virtualhost == command->virtualhost
            && current_node->uid == command->uid
            && current_node->gid == command->gid)
            break;
    }

The doc says "A process class is the set of processes which were
started with the same executable file and share certain other
characteristics such as virtual host and identity. Two commands which
are links to or otherwise refer to the same executable file share the
same process class."

(Meanwhile, there is a field called share_grp_id that was defined but is
not used (recently?) and has some interaction with all of this.  I don't
recall the intention, but I think Rainer or Chris described it on this
list at some point.)

I guess your problem would be resolved if we allowed some control over
the attributes that define the class.  You'd want to ignore
virtualhost; I can't recall whether other attributes have been
problematic.
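
As an untested sketch, the matching loop above could make the vhost
comparison conditional on a new server-config flag; the flag name
class_match_vhost here is made up, nothing like it exists today:

    /* Hypothetical: skip the vhost comparison when the (not yet
     * existing) class_match_vhost flag is disabled, so all vhosts
     * running the same executable as the same uid/gid share a class. */
    if (current_node->inode == command->inode
        && current_node->deviceid == command->deviceid
        && !strcmp(current_node->cmdline, command->cmdline)
        && (!sconf->class_match_vhost
            || current_node->virtualhost == command->virtualhost)
        && current_node->uid == command->uid
        && current_node->gid == command->gid)
        break;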

We/I/? thought before about allowing a symbolic name to be set per
vhost that would be used instead of the vhost; you could declare the
same name in multiple vhosts if desired, or even use the same name
globally.  That could be a good solution if the virtualhost attribute
of the class is the only problematic one.
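
To make that concrete, the configuration might look something like
this; FcgidProcessClassName is a made-up name for the proposed
directive, not something that exists today:

    # Hypothetical directive: both vhosts declare the same class name,
    # so FcgidMaxProcessesPerClass counts their PHP processes together,
    # much as the pre-2.3 code did for a single user.
    <VirtualHost *:80>
        ServerName site1.example.com
        FcgidProcessClassName steve
    </VirtualHost>
    <VirtualHost *:80>
        ServerName site2.example.com
        FcgidProcessClassName steve
    </VirtualHost>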

Meanwhile, I respect that virtualhost was added as an attribute to the
class while fixing a more serious problem -- see the change log entry
here: http://svn.apache.org/viewvc/httpd/mod_fcgid/trunk/mod_fcgid/ChangeLog?r1=753578&r2=753577&pathrev=753578
-- so the exact code changes may not be just the obvious ones.

Possibly there's an intersection with mod_fcgid's server-status report.

>
> This also introduces another problem when something causes the server to
> slow down a little. Say, for example, you have 50 users on one server,
> each running 10 vhosts, with a max process count of 3. Under the old
> system the server would be limited to spawning 150 PHP processes. Okay,
> so that sucks, but it can be handled: the server catches up, or a
> watchful eye or script comes in and massages out the kinks. Under the
> new method you now have the server spawning up to 1500 PHP processes.
>
> I'm sure that example has lots of holes and might not be the best.
>
> But we really need to get back a way to put a limit on PHP processes by
> user. This per-vhost thing might be great in theory or for people who can
> exercise an iron fist of control on their servers, but for people running
> shared servers trying to sell a product and be competitive this change just
> blows the lid off of one of the most valuable resource/user controls that we
> had.
>
> The real-life effect is that users are more able to impact others, plus a
> 10-20% increase in committed memory for all these extra PHP processes. So
> far we've been able to counteract this a little by tightening up (a little
> too tightly) the idle timeouts on the processes above the minimum, but
> that in turn costs CPU.
>
> Or I don't know, maybe I'm missing something?

I don't think you're missing anything.

I'll try to look in more detail to see what implementation
complications there are.

If someone recalls the old discussions of defining the class in
different ways, a summary would be great.  (Otherwise, they're in the
mailing list archives.)