Posted to dev@httpd.apache.org by Jan Kaluža <jk...@redhat.com> on 2015/08/24 16:47:54 UTC

PR 58267: Regression in 2.2.31 caused by r1680920

Hi,

unfortunately, r1680920 brought the undesired behavior described in PR 
58267 to 2.2.x. The bug is well described in the PR, so I won't repeat 
it in this email.

I have tried to debug it, and I think the problem is that we also use 
server->server_hostname to compute the hash in 
ap_proxy_set_scoreboard_lb. This hash is used to find the proper 
ap_scoreboard field.

It all happens in mod_proxy.c:child_init's scope.

If a "<Proxy balancer://foobar>" block has been defined, all of its 
BalancerMembers are initialized with a hash computed using the global 
server->server_hostname.

Later, if "ProxyPass /foobar/ balancer://foobar/" is used in a 
VirtualHost, ap_proxy_initialize_worker_share is called again, this time 
with server->server_hostname set to the VirtualHost's hostname.

Now, the root of the error is that the scoreboard size is static (set 
to proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT) and is not 
incremented when a ProxyPass with a balancer is used in a virtualhost. 
This leads to a lack of space in the scoreboard when balancers are used 
in multiple virtualhosts.

I think there are two possible fixes:

1) Do not use server->server_hostname when computing the hash that 
determines the right scoreboard field. I think this would fix the bug, 
but I'm not sure what would happen when you define 2 balancers with the 
same name in different virtualhosts...

On the other hand, when there is a global Proxy balancer, it makes 
sense to use the same worker->s for all the ProxyPass directives in 
virtualhosts.

2) Increment proxy_lb_workers according to the number of workers in the 
balancer when "ProxyPass /foobar/ balancer://foobar/" is used in a 
VirtualHost. The scoreboard would then have the right size and 
ap_proxy_set_scoreboard_lb would not fail.


Since 2.2.x should probably stay stable without big changes, I'm asking 
the list for more opinions... I will try to implement a patch for 
option 2) tomorrow and see if it really fixes the issue.

Regards,
Jan Kaluza

Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jan Kaluža <jk...@redhat.com>.
On 08/24/2015 04:47 PM, Jan Kaluža wrote:
> Hi,
>
> unfortunately, the r1680920 brought undesired behavior described in PR
> 58267 to 2.2.x. The bug is well described in the PR, so I won't describe
> it in this email.
>
> I have tried to debug it and I think the problem is that we use also
> server->server_hostname to compute the hash in the
> ap_proxy_set_scoreboard_lb. This hash is used to find out proper
> ap_scoreboard field.
>
> It all happens in mod_proxy.c:child_init's scope.
>
> If the "<Proxy Balancer://foobar>" has been defined, all the
> BalancerMembers are initialized with the hash computed with usage of
> global server->server_hostname.
>
> Later, if the "ProxyPass /foobar/ Balancer://foobar/" has been used in
> the VirtualHost, ap_proxy_initialize_worker_share is called again with
> server->server_hostname set to the VirtualHost's one.
>
> Now, the root of the error is that the scoreboard size is static (set to
> proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT), but it is not
> incremented when ProxyPass with balancer is used in the virtualhost.
> This leads to lack of space in scoreboard when Balancers are used in
> multiple virtualhosts.
>
> I think there are two possible fixes:
>
> 1) Do not use server->server_hostname when computing hash which is used
> to determine right scoreboard field. I think this would fix this bug,
> but I'm not sure what would happen in situations when you define 2
> balancers with the same name in different virtualhosts...
>
> On the other-side, when there is global Proxy balancer, it make sense to
> use the same worker->s for all the ProxyPass in virtualhosts.

It seems that 2.4.x uses just the name of the worker to determine the 
slot for shared memory, so maybe it wouldn't be a problem for 2.2.x 
either...

Jan Kaluza

> 2) Increment proxy_lb_workers according to number of workers in balancer
> when using "ProxyPass /foobar/ Balancer://foobar/" in the VirtualHost.
> The scoreboard would have right size and ap_proxy_set_scoreboard_lb
> would not fail then.
>
>
> Since it's 2.2.x which should be probably stable without big changes,
> I'm asking the list for more opinions... I will try to implement patch
> for option 2) tomorrow and see if this really fixes the issue.
>
> Regards,
> Jan Kaluza


Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, Aug 25, 2015 at 10:22 AM, Jan Kaluža <jk...@redhat.com> wrote:
>
> On 08/24/2015 11:12 PM, Yann Ylavic wrote:
>>
>> I tested the below which seems to work.
>
>
> Hm, this reserves the slots in scoreboard even when the balancers are not
> used in the virtualhost, or am I wrong?

Correct, but there may still be RewriteRule(s) (with [P] flag) that
use the balancer.

>
> I originally thought about increasing proxy_lb_workers only when they are
> used in the ProxyPass* in the vhost.

I think this would be a regression: RewriteRules using <Proxy>-declared 
balancers/workers also reuse their parameters/connections (not the 
"proxy:reverse" worker), and some third-party modules may too.
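
For illustration, such a rule could look like this (the path pattern is 
hypothetical; the balancer is the one declared globally):

```apache
<VirtualHost *:80>
    RewriteEngine On
    # [P] proxies through balancer://foobar even though this vhost
    # has no ProxyPass referencing it, so its slots must still be
    # reserved in the scoreboard.
    RewriteRule ^/app/(.*)$ balancer://foobar/$1 [P,L]
</VirtualHost>
```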

Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, Aug 25, 2015 at 10:48 AM, Plüm, Rüdiger, Vodafone Group
<ru...@vodafone.com> wrote:
> I think the current state of 2.2.31 breaks existing 2.2.x configurations from prior to 2.2.31.
> Prior to 2.2.31 you could do the following:
>
> <Proxy Balancer://proxy1>
> BalancerMember ajp://127.0.0.1:7001
> BalancerMember ajp://127.0.0.2:7002
> </Proxy>
>
> <virtualhost *:80>
> ProxyPass / balancer://proxy1/
> </virtualhost>
>
> <virtualhost 127.0.0.1:8080>
>
> <Location /bmanager>
>    sethandler balancer-manager
> </Location>
>
> </virtualhost>
>
> With this configuration you could provide your reverse proxy to the outside world while having the
> balancer-manager, which manages the balancer, listening only on localhost. This is no longer possible with 2.2.31,
> as worker->s is now different in each virtualhost, whereas before it was the same because worker->id was used
> to identify the scoreboard slot.

You are right, the old behaviour was to share the score of the main
server across all the vhosts that use it.

I thought that since the worker's parameters are per vhost, each
vhost's worker had its own score so that e.g. an error on one of them
would not affect the error state of the others.
I can (and indeed always unconsciously did) live with the fact that
the scored parameters are ignored in the vhost if they are also
defined in the main server (i.e. it is useless to define the same
<Proxy> with different parameters both in the main server and a vhost
:)

> The patches proposed so far would not change this.

Yes, it implements a score per vhost...

On Tue, Aug 25, 2015 at 11:39 AM, Plüm, Rüdiger, Vodafone Group
<ru...@vodafone.com> wrote:
> How about the following patch? It uses the server_rec of the server that originally created the configuration item.

LGTM, thanks Rüdiger for helping on this.

Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, Aug 25, 2015 at 2:15 PM, Jan Kaluža <jk...@redhat.com> wrote:
> On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:
>>
>> How about the following patch? It uses the server_rec of the server that
>> originally created the configuration item.
>
>
> This looks like a good strategy. I've verified that the patch fixes this issue
> and does not break anything in brief testing.
>
> What do you think, Yann?

OK for me too.

Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jim Jagielski <ji...@jaguNET.com>.
+1
> On Aug 25, 2015, at 8:57 AM, Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com> wrote:
> 
> Now the more complete patch (including bump):
> 
> Index: modules/proxy/proxy_util.c
> ===================================================================
> --- modules/proxy/proxy_util.c	(revision 1697578)
> +++ modules/proxy/proxy_util.c	(working copy)
> @@ -1460,6 +1460,7 @@
>     (*worker)->flush_packets = flush_off;
>     (*worker)->flush_wait = PROXY_FLUSH_WAIT;
>     (*worker)->smax = -1;
> +    (*worker)->server = conf->s;
>     /* Increase the total worker count */
>     proxy_lb_workers++;
>     init_conn_pool(p, *worker);
> @@ -1807,6 +1808,7 @@
>                                                 server_rec *server)
> {
>     if (ap_scoreboard_image && !worker->s) {
> +        server_rec *id_server;
>         int i = 0;
>         proxy_worker_stat *free_slot = NULL;
>         proxy_worker_stat *s;
> @@ -1824,14 +1826,20 @@
>             apr_md5_update(&ctx, (unsigned char *)balancer->name,
>                            strlen(balancer->name));
>         }
> -        if (server) {
> +        if (worker->server) {
> +            id_server = worker->server;
> +        }
> +        else {
> +            id_server = server;
> +        }
> +        if (id_server) {
>             server_addr_rec *addr;
>             /* Assumes the unique identifier of a vhost is its address(es)
>              * plus the ServerName:Port. Should two or more vhosts have this
>              * same identifier, the first one would always be elected to
>              * handle the requests, so this shouldn't be an issue...
>              */
> -            for (addr = server->addrs; addr; addr = addr->next) {
> +            for (addr = id_server->addrs; addr; addr = addr->next) {
>                 char host_ip[64]; /* for any IPv[46] string */
>                 apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
>                                        addr->host_addr);
> @@ -1840,10 +1848,10 @@
>                 apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
>                                sizeof(addr->host_port));
>             }
> -            apr_md5_update(&ctx, (unsigned char *)server->server_hostname,
> -                           strlen(server->server_hostname));
> -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> -                           sizeof(server->port));
> +            apr_md5_update(&ctx, (unsigned char *)id_server->server_hostname,
> +                           strlen(id_server->server_hostname));
> +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> +                           sizeof(id_server->port));
>         }
>         apr_md5_final(digest, &ctx);
> 
> Index: modules/proxy/mod_proxy.c
> ===================================================================
> --- modules/proxy/mod_proxy.c	(revision 1697578)
> +++ modules/proxy/mod_proxy.c	(working copy)
> @@ -1129,6 +1129,7 @@
>     ps->badopt = bad_error;
>     ps->badopt_set = 0;
>     ps->pool = p;
> +    ps->s = s;
> 
>     return ps;
> }
> @@ -1172,6 +1173,7 @@
>     ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
>     ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
>     ps->pool = p;
> +    ps->s = overrides->s;
>     return ps;
> }
> 
> Index: modules/proxy/mod_proxy.h
> ===================================================================
> --- modules/proxy/mod_proxy.h	(revision 1697578)
> +++ modules/proxy/mod_proxy.h	(working copy)
> @@ -193,6 +193,7 @@
>     } proxy_status;             /* Status display options */
>     char proxy_status_set;
>     apr_pool_t *pool;           /* Pool used for allocating this struct */
> +    server_rec *s;              /* The server_rec where this configuration was created in */
> } proxy_server_conf;
> 
> 
> @@ -369,6 +370,7 @@
>     char            disablereuse_set;
>     apr_interval_time_t conn_timeout;
>     char            conn_timeout_set;
> +    server_rec      *server;    /* The server_rec where this configuration was created in */
> };
> 
> /*
> Index: include/ap_mmn.h
> ===================================================================
> --- include/ap_mmn.h	(revision 1697578)
> +++ include/ap_mmn.h	(working copy)
> @@ -158,6 +158,8 @@
>  * 20051115.38 (2.2.30) Add ap_proxy_set_scoreboard_lb() in mod_proxy.h
>  * 20051115.39 (2.2.30) Add ap_proxy_connection_reusable()
>  * 20051115.40 (2.2.30) Add ap_map_http_request_error()
> + * 20051115.41 (2.2.32) Add s member to proxy_server_conf struct and server
> + *                      member to proxy_worker struct.
>  */
> 
> #define MODULE_MAGIC_COOKIE 0x41503232UL /* "AP22" */
> @@ -165,7 +167,7 @@
> #ifndef MODULE_MAGIC_NUMBER_MAJOR
> #define MODULE_MAGIC_NUMBER_MAJOR 20051115
> #endif
> -#define MODULE_MAGIC_NUMBER_MINOR 40                    /* 0...n */
> +#define MODULE_MAGIC_NUMBER_MINOR 41                    /* 0...n */
> 
> /**
>  * Determine if the server's current MODULE_MAGIC_NUMBER is at least a
> 
> 
> Regards
> 
> Rüdiger
> 
>> -----Original Message-----
>> From: Plüm, Rüdiger, Vodafone Group
>> Sent: Dienstag, 25. August 2015 14:41
>> To: dev@httpd.apache.org
>> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
>> 
>> Thanks for the pointer. It is missing because I removed it by accident
>> when excluding some debug code I setup locally for analysing the issue
>> from the patch I posted. I will post a proper version and if you agree put
>> it in STATUS for 2.2.x. As far as I can tell this change only applies to
>> 2.2.x. So it would be fine to propose it directly in STATUS without any
>> trunk commit.
>> 
>> Regards
>> 
>> Rüdiger
>> 
>>> -----Original Message-----
>>> From: Jan Kaluža [mailto:jkaluza@redhat.com]
>>> Sent: Dienstag, 25. August 2015 14:15
>>> To: dev@httpd.apache.org
>>> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
>>> 
>>> On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:
>>>> How about the following patch? It uses the server_rec of the server
>> that
>>> originally created the configuration item.
>>> 
>>> This looks like good strategy. I've verified that the patch fixes this
>>> issue and does not break anything when testing briefly.
>>> 
>>> What do you think, Yann?
>>> 
>>> Note that "id_server" declaration is missing in the patch. Otherwise
>>> it's OK.
>>> 
>>> Regards,
>>> Jan Kaluza
>>> 
>>>> Index: modules/proxy/proxy_util.c
>>>> ===================================================================
>>>> --- modules/proxy/proxy_util.c	(revision 1697578)
>>>> +++ modules/proxy/proxy_util.c	(working copy)
>>>> @@ -1460,6 +1460,7 @@
>>>>      (*worker)->flush_packets = flush_off;
>>>>      (*worker)->flush_wait = PROXY_FLUSH_WAIT;
>>>>      (*worker)->smax = -1;
>>>> +    (*worker)->server = conf->s;
>>>>      /* Increase the total worker count */
>>>>      proxy_lb_workers++;
>>>>      init_conn_pool(p, *worker);
>>>> @@ -1824,14 +1829,20 @@
>>>>              apr_md5_update(&ctx, (unsigned char *)balancer->name,
>>>>                             strlen(balancer->name));
>>>>          }
>>>> -        if (server) {
>>>> +        if (worker->server) {
>>>> +            id_server = worker->server;
>>>> +        }
>>>> +        else {
>>>> +            id_server = server;
>>>> +        }
>>>> +        if (id_server) {
>>>>              server_addr_rec *addr;
>>>>              /* Assumes the unique identifier of a vhost is its
>>> address(es)
>>>>               * plus the ServerName:Port. Should two or more vhosts
>>> have this
>>>>               * same identifier, the first one would always be
>> elected
>>> to
>>>>               * handle the requests, so this shouldn't be an issue...
>>>>               */
>>>> -            for (addr = server->addrs; addr; addr = addr->next) {
>>>> +            for (addr = id_server->addrs; addr; addr = addr->next) {
>>>>                  char host_ip[64]; /* for any IPv[46] string */
>>>>                  apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
>>>>                                         addr->host_addr);
>>>> @@ -1840,10 +1851,10 @@
>>>>                  apr_md5_update(&ctx, (unsigned char *)&addr-
>>>> host_port,
>>>>                                 sizeof(addr->host_port));
>>>>              }
>>>> -            apr_md5_update(&ctx, (unsigned char *)server-
>>>> server_hostname,
>>>> -                           strlen(server->server_hostname));
>>>> -            apr_md5_update(&ctx, (unsigned char *)&server->port,
>>>> -                           sizeof(server->port));
>>>> +            apr_md5_update(&ctx, (unsigned char *)id_server-
>>>> server_hostname,
>>>> +                           strlen(id_server->server_hostname));
>>>> +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
>>>> +                           sizeof(id_server->port));
>>>>          }
>>>>          apr_md5_final(digest, &ctx);
>>>> 
>>>> Index: modules/proxy/mod_proxy.c
>>>> ===================================================================
>>>> --- modules/proxy/mod_proxy.c	(revision 1697578)
>>>> +++ modules/proxy/mod_proxy.c	(working copy)
>>>> @@ -1129,6 +1129,7 @@
>>>>      ps->badopt = bad_error;
>>>>      ps->badopt_set = 0;
>>>>      ps->pool = p;
>>>> +    ps->s = s;
>>>> 
>>>>      return ps;
>>>>  }
>>>> @@ -1172,6 +1173,7 @@
>>>>      ps->proxy_status = (overrides->proxy_status_set == 0) ? base-
>>>> proxy_status : overrides->proxy_status;
>>>>      ps->proxy_status_set = overrides->proxy_status_set || base-
>>>> proxy_status_set;
>>>>      ps->pool = p;
>>>> +    ps->s = overrides->s;
>>>>      return ps;
>>>>  }
>>>> 
>>>> Index: modules/proxy/mod_proxy.h
>>>> ===================================================================
>>>> --- modules/proxy/mod_proxy.h	(revision 1697578)
>>>> +++ modules/proxy/mod_proxy.h	(working copy)
>>>> @@ -193,6 +193,7 @@
>>>>      } proxy_status;             /* Status display options */
>>>>      char proxy_status_set;
>>>>      apr_pool_t *pool;           /* Pool used for allocating this
>>> struct */
>>>> +    server_rec *s;              /* The server_rec where this
>>> configuration was created in */
>>>>  } proxy_server_conf;
>>>> 
>>>> 
>>>> @@ -369,6 +370,7 @@
>>>>      char            disablereuse_set;
>>>>      apr_interval_time_t conn_timeout;
>>>>      char            conn_timeout_set;
>>>> +    server_rec      *server;    /* The server_rec where this
>>> configuration was created in */
>>>>  };
>>>> 
>>>>  /*
>>>> 
>>>> 
>>>> Regards
>>>> 
>>>> Rüdiger
>>>> 
>>>>> -----Original Message-----
>>>>> From: Plüm, Rüdiger, Vodafone Group
>>>>> Sent: Dienstag, 25. August 2015 10:48
>>>>> To: dev@httpd.apache.org
>>>>> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
>>>>> 
>>>>> I think the current state of 2.2.31 breaks existing 2.2.x
>> configuration
>>>>> prior to 2.2.31.
>>>>> Prior to 2.2.31 you could do the following:
>>>>> 
>>>>> <Proxy Balancer://proxy1>
>>>>> BalancerMember ajp://127.0.0.1:7001
>>>>> BalancerMember ajp://127.0.0.2:7002
>>>>> </Proxy>
>>>>> 
>>>>> <virtualhost *:80>
>>>>> ProxyPass / balancer://proxy1/
>>>>> </virtualhost>
>>>>> 
>>>>> <virtualhost 127.0.0.1:8080>
>>>>> 
>>>>> <Location /bmanager>
>>>>>    sethandler balancer-manager
>>>>> </Location>
>>>>> 
>>>>> </virtualhost>
>>>>> 
>>>>> With this configuration you could provide your reverse proxy to the
>>>>> outside world while having the
>>>>> balancermanager managing the balancer only listening to localhost.
>> This
>>> is
>>>>> not possible any longer with 2.2.31
>>>>> as the worker->s is now different in each virtualhost whereas before
>> it
>>>>> was the same as the worker->id was used
>>>>> to identify the scoreboard slot.
>>>>> The patches proposed so far would not change this. I think in order
>> to
>>>>> revert to the old behaviour we need to
>>>>> store with each worker in which server_rec context it was created.
>> e.g.
>>>>> adding a const char * field to the worker that would be filled with
>>>>> server->server_hostname. Then we could use this value for creating
>> the
>>>>> md5.
>>>>> 
>>>>> Regards
>>>>> 
>>>>> Rüdiger
>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: Jan Kaluža [mailto:jkaluza@redhat.com]
>>>>>> Sent: Dienstag, 25. August 2015 10:23
>>>>>> To: dev@httpd.apache.org
>>>>>> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
>>>>>> 
>>>>>> On 08/24/2015 11:12 PM, Yann Ylavic wrote:
>>>>>>> On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com>
>>>>>> wrote:
>>>>>>>> 
>>>>>>>> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com>
>>>>> wrote:
>>>>>>>>> 
>>>>>>>>> 2) Increment proxy_lb_workers according to number of workers in
>>>>>> balancer
>>>>>>>>> when using "ProxyPass /foobar/ Balancer://foobar/" in the
>>>>> VirtualHost.
>>>>>> The
>>>>>>>>> scoreboard would have right size and ap_proxy_set_scoreboard_lb
>>>>> would
>>>>>> not
>>>>>>>>> fail then.
>>>>>>>> 
>>>>>>>> I think we can do this quite easily in merge_proxy_config(), by
>>>>>>>> incrementing proxy_lb_workers for each base->balancers->workers. I
>>>>> did
>>>>>>>> not test it yet though.
>>>>>>> 
>>>>>>> I tested the below which seems to work.
>>>>>> 
>>>>>> Hm, this reserves the slots in scoreboard even when the balancers
>> are
>>>>>> not used in the virtualhost, or am I wrong?
>>>>>> 
>>>>>> I originally thought about increasing proxy_lb_workers only when
>> they
>>>>>> are used in the ProxyPass* in the vhost.
>>>>>> 
>>>>>>> Index: modules/proxy/mod_proxy.c
>>>>>>> ===================================================================
>>>>>>> --- modules/proxy/mod_proxy.c    (revision 1697358)
>>>>>>> +++ modules/proxy/mod_proxy.c    (working copy)
>>>>>>> @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t
>>> *p,
>>>>> s
>>>>>>> 
>>>>>>>   static void * merge_proxy_config(apr_pool_t *p, void *basev,
>> void
>>>>>> *overridesv)
>>>>>>>   {
>>>>>>> +    int i;
>>>>>>>       proxy_server_conf *ps = apr_pcalloc(p,
>>>>> sizeof(proxy_server_conf));
>>>>>>>       proxy_server_conf *base = (proxy_server_conf *) basev;
>>>>>>>       proxy_server_conf *overrides = (proxy_server_conf *)
>>> overridesv;
>>>>>>> @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t
>>> *p,
>>>>>> vo
>>>>>>>       ps->allowed_connect_ports = apr_array_append(p,
>>>>>>> base->allowed_connect_ports, overrides->allowed_connect_ports);
>>>>>>>       ps->workers = apr_array_append(p, base->workers, overrides-
>>>>>>> workers);
>>>>>>>       ps->balancers = apr_array_append(p, base->balancers,
>>> overrides-
>>>>>>> balancers);
>>>>>>> +    /* The balancers inherited from base don't have their members
>>>>>> reserved on
>>>>>>> +     * the scorebord_lb for this server, account for them now.
>>>>>>> +     */
>>>>>>> +    for (i = 0; i < base->balancers->nelts; ++i) {
>>>>>>> +        proxy_balancer *balancer = (proxy_balancer *)base-
>>>> balancers-
>>>>>>> elts + i;
>>>>>>> +        proxy_lb_workers += balancer->workers->nelts;
>>>>>>> +    }
>>>>>>>       ps->forward = overrides->forward ? overrides->forward :
>> base-
>>>>>>> forward;
>>>>>>>       ps->reverse = overrides->reverse ? overrides->reverse :
>> base-
>>>>>>> reverse;
>>>>>>> 
>>>>>>> --
>>>>>>> 
>>>>>>> Please note that since all the workers would really be accounted in
>>>>>>> the scoreboard, configurations like the one of PR 58267 (with
>>>>>>> inherited balancers) would also need bigger SHMs (but no more) than
>>>>>>> currently...
>>>>>>> 
>>>>>>> WDYT?
>>>>>>> 
>>>>>> 
>>>>>> Regards,
>>>>>> Jan Kaluza
>>>>>> 
>>>> 
> 
> <pr58267.diff>


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
Plus CHANGES:

Index: CHANGES
===================================================================
--- CHANGES     (revision 1697578)
+++ CHANGES     (working copy)
@@ -1,8 +1,10 @@
                                                          -*- coding: utf-8 -*-
 Changes with Apache 2.2.32

+  *) mod_proxy: Fix a regression with 2.2.31 that caused inherited workers to
+     use a different scoreboard slot than the original one.  PR 58267.
+     [Ruediger Pluem]

-
 Changes with Apache 2.2.31

   *) Correct win32 build issues for mod_proxy exports, OpenSSL 1.0.x headers.

Regards

Rüdiger

> -----Original Message-----
> From: Plüm, Rüdiger, Vodafone Group
> Sent: Dienstag, 25. August 2015 14:58
> To: dev@httpd.apache.org
> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
> 
> Now the more complete patch (including bump):
> 
> Index: modules/proxy/proxy_util.c
> ===================================================================
> --- modules/proxy/proxy_util.c	(revision 1697578)
> +++ modules/proxy/proxy_util.c	(working copy)
> @@ -1460,6 +1460,7 @@
>      (*worker)->flush_packets = flush_off;
>      (*worker)->flush_wait = PROXY_FLUSH_WAIT;
>      (*worker)->smax = -1;
> +    (*worker)->server = conf->s;
>      /* Increase the total worker count */
>      proxy_lb_workers++;
>      init_conn_pool(p, *worker);
> @@ -1807,6 +1808,7 @@
>                                                  server_rec *server)
>  {
>      if (ap_scoreboard_image && !worker->s) {
> +        server_rec *id_server;
>          int i = 0;
>          proxy_worker_stat *free_slot = NULL;
>          proxy_worker_stat *s;
> @@ -1824,14 +1826,20 @@
>              apr_md5_update(&ctx, (unsigned char *)balancer->name,
>                             strlen(balancer->name));
>          }
> -        if (server) {
> +        if (worker->server) {
> +            id_server = worker->server;
> +        }
> +        else {
> +            id_server = server;
> +        }
> +        if (id_server) {
>              server_addr_rec *addr;
>              /* Assumes the unique identifier of a vhost is its
> address(es)
>               * plus the ServerName:Port. Should two or more vhosts have
> this
>               * same identifier, the first one would always be elected to
>               * handle the requests, so this shouldn't be an issue...
>               */
> -            for (addr = server->addrs; addr; addr = addr->next) {
> +            for (addr = id_server->addrs; addr; addr = addr->next) {
>                  char host_ip[64]; /* for any IPv[46] string */
>                  apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
>                                         addr->host_addr);
> @@ -1840,10 +1848,10 @@
>                  apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
>                                 sizeof(addr->host_port));
>              }
> -            apr_md5_update(&ctx, (unsigned char *)server-
> >server_hostname,
> -                           strlen(server->server_hostname));
> -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> -                           sizeof(server->port));
> +            apr_md5_update(&ctx, (unsigned char *)id_server-
> >server_hostname,
> +                           strlen(id_server->server_hostname));
> +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> +                           sizeof(id_server->port));
>          }
>          apr_md5_final(digest, &ctx);
> 
> Index: modules/proxy/mod_proxy.c
> ===================================================================
> --- modules/proxy/mod_proxy.c	(revision 1697578)
> +++ modules/proxy/mod_proxy.c	(working copy)
> @@ -1129,6 +1129,7 @@
>      ps->badopt = bad_error;
>      ps->badopt_set = 0;
>      ps->pool = p;
> +    ps->s = s;
> 
>      return ps;
>  }
> @@ -1172,6 +1173,7 @@
>      ps->proxy_status = (overrides->proxy_status_set == 0) ? base-
> >proxy_status : overrides->proxy_status;
>      ps->proxy_status_set = overrides->proxy_status_set || base-
> >proxy_status_set;
>      ps->pool = p;
> +    ps->s = overrides->s;
>      return ps;
>  }
> 
> Index: modules/proxy/mod_proxy.h
> ===================================================================
> --- modules/proxy/mod_proxy.h	(revision 1697578)
> +++ modules/proxy/mod_proxy.h	(working copy)
> @@ -193,6 +193,7 @@
>      } proxy_status;             /* Status display options */
>      char proxy_status_set;
>      apr_pool_t *pool;           /* Pool used for allocating this struct
> */
> +    server_rec *s;              /* The server_rec where this
> configuration was created in */
>  } proxy_server_conf;
> 
> 
> @@ -369,6 +370,7 @@
>      char            disablereuse_set;
>      apr_interval_time_t conn_timeout;
>      char            conn_timeout_set;
> +    server_rec      *server;    /* The server_rec where this
> configuration was created in */
>  };
> 
>  /*
> Index: include/ap_mmn.h
> ===================================================================
> --- include/ap_mmn.h	(revision 1697578)
> +++ include/ap_mmn.h	(working copy)
> @@ -158,6 +158,8 @@
>   * 20051115.38 (2.2.30) Add ap_proxy_set_scoreboard_lb() in mod_proxy.h
>   * 20051115.39 (2.2.30) Add ap_proxy_connection_reusable()
>   * 20051115.40 (2.2.30) Add ap_map_http_request_error()
> + * 20051115.41 (2.2.32) Add s member to proxy_server_conf struct and
> server
> + *                      member to proxy_worker struct.
>   */
> 
>  #define MODULE_MAGIC_COOKIE 0x41503232UL /* "AP22" */
> @@ -165,7 +167,7 @@
>  #ifndef MODULE_MAGIC_NUMBER_MAJOR
>  #define MODULE_MAGIC_NUMBER_MAJOR 20051115
>  #endif
> -#define MODULE_MAGIC_NUMBER_MINOR 40                    /* 0...n */
> +#define MODULE_MAGIC_NUMBER_MINOR 41                    /* 0...n */
> 
>  /**
>   * Determine if the server's current MODULE_MAGIC_NUMBER is at least a
> 
> 
> Regards
> 
> Rüdiger
> 
> > -----Original Message-----
> > From: Plüm, Rüdiger, Vodafone Group
> > Sent: Dienstag, 25. August 2015 14:41
> > To: dev@httpd.apache.org
> > Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
> >
> > Thanks for the pointer. It is missing because I removed it by accident
> > when excluding some debug code I setup locally for analysing the issue
> > from the patch I posted. I will post a proper version and if you agree
> put
> > it in STATUS for 2.2.x. As far as I can tell this change only applies to
> > 2.2.x. So it would be fine to propose it directly in STATUS without any
> > trunk commit.
> >
> > Regards
> >
> > Rüdiger
> >
> > > -----Original Message-----
> > > From: Jan Kaluža [mailto:jkaluza@redhat.com]
> > > Sent: Dienstag, 25. August 2015 14:15
> > > To: dev@httpd.apache.org
> > > Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
> > >
> > > On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:
> > > > How about the following patch? It uses the server_rec of the server
> > that
> > > originally created the configuration item.
> > >
> > > This looks like good strategy. I've verified that the patch fixes this
> > > issue and does not break anything when testing briefly.
> > >
> > > What do you think, Yann?
> > >
> > > Note that "id_server" declaration is missing in the patch. Otherwise
> > > it's OK.
> > >
> > > Regards,
> > > Jan Kaluza
> > >
> > > > Index: modules/proxy/proxy_util.c
> > > > ===================================================================
> > > > --- modules/proxy/proxy_util.c	(revision 1697578)
> > > > +++ modules/proxy/proxy_util.c	(working copy)
> > > > @@ -1460,6 +1460,7 @@
> > > >       (*worker)->flush_packets = flush_off;
> > > >       (*worker)->flush_wait = PROXY_FLUSH_WAIT;
> > > >       (*worker)->smax = -1;
> > > > +    (*worker)->server = conf->s;
> > > >       /* Increase the total worker count */
> > > >       proxy_lb_workers++;
> > > >       init_conn_pool(p, *worker);
> > > > @@ -1824,14 +1829,20 @@
> > > >               apr_md5_update(&ctx, (unsigned char *)balancer->name,
> > > >                              strlen(balancer->name));
> > > >           }
> > > > -        if (server) {
> > > > +        if (worker->server) {
> > > > +            id_server = worker->server;
> > > > +        }
> > > > +        else {
> > > > +            id_server = server;
> > > > +        }
> > > > +        if (id_server) {
> > > >               server_addr_rec *addr;
> > > >               /* Assumes the unique identifier of a vhost is its address(es)
> > > >                * plus the ServerName:Port. Should two or more vhosts have this
> > > >                * same identifier, the first one would always be elected to
> > > >                * handle the requests, so this shouldn't be an issue...
> > > >                */
> > > > -            for (addr = server->addrs; addr; addr = addr->next) {
> > > > +            for (addr = id_server->addrs; addr; addr = addr->next) {
> > > >                   char host_ip[64]; /* for any IPv[46] string */
> > > >                   apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
> > > >                                          addr->host_addr);
> > > > @@ -1840,10 +1851,10 @@
> > > >                   apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
> > > >                                  sizeof(addr->host_port));
> > > >               }
> > > > -            apr_md5_update(&ctx, (unsigned char *)server->server_hostname,
> > > > -                           strlen(server->server_hostname));
> > > > -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> > > > -                           sizeof(server->port));
> > > > +            apr_md5_update(&ctx, (unsigned char *)id_server->server_hostname,
> > > > +                           strlen(id_server->server_hostname));
> > > > +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> > > > +                           sizeof(id_server->port));
> > > >           }
> > > >           apr_md5_final(digest, &ctx);
> > > >
> > > > Index: modules/proxy/mod_proxy.c
> > > > ===================================================================
> > > > --- modules/proxy/mod_proxy.c	(revision 1697578)
> > > > +++ modules/proxy/mod_proxy.c	(working copy)
> > > > @@ -1129,6 +1129,7 @@
> > > >       ps->badopt = bad_error;
> > > >       ps->badopt_set = 0;
> > > >       ps->pool = p;
> > > > +    ps->s = s;
> > > >
> > > >       return ps;
> > > >   }
> > > > @@ -1172,6 +1173,7 @@
> > > >       ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
> > > >       ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
> > > >       ps->pool = p;
> > > > +    ps->s = overrides->s;
> > > >       return ps;
> > > >   }
> > > >
> > > > Index: modules/proxy/mod_proxy.h
> > > > ===================================================================
> > > > --- modules/proxy/mod_proxy.h	(revision 1697578)
> > > > +++ modules/proxy/mod_proxy.h	(working copy)
> > > > @@ -193,6 +193,7 @@
> > > >       } proxy_status;             /* Status display options */
> > > >       char proxy_status_set;
> > > >       apr_pool_t *pool;           /* Pool used for allocating this struct */
> > > > +    server_rec *s;              /* The server_rec where this configuration was created in */
> > > >   } proxy_server_conf;
> > > >
> > > >
> > > > @@ -369,6 +370,7 @@
> > > >       char            disablereuse_set;
> > > >       apr_interval_time_t conn_timeout;
> > > >       char            conn_timeout_set;
> > > > +    server_rec      *server;    /* The server_rec where this configuration was created in */
> > > >   };
> > > >
> > > >   /*
> > > >
> > > >
> > > > Regards
> > > >
> > > > Rüdiger
> > > >
> > > >> -----Original Message-----
> > > >> From: Plüm, Rüdiger, Vodafone Group
> > > >> Sent: Dienstag, 25. August 2015 10:48
> > > >> To: dev@httpd.apache.org
> > > >> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
> > > >>
> > > >> I think the current state of 2.2.31 breaks existing 2.2.x
> > > >> configurations that worked prior to 2.2.31.
> > > >> Prior to 2.2.31 you could do the following:
> > > >>
> > > >> <Proxy Balancer://proxy1>
> > > >> BalancerMember ajp://127.0.0.1:7001
> > > >> BalancerMember ajp://127.0.0.2:7002
> > > >> </Proxy>
> > > >>
> > > >> <virtualhost *:80>
> > > >> ProxyPass / balancer://proxy1/
> > > >> </virtualhost>
> > > >>
> > > >> <virtualhost 127.0.0.1:8080>
> > > >>
> > > >> <Location /bmanager>
> > > >>     sethandler balancer-manager
> > > >> </Location>
> > > >>
> > > >> </virtualhost>
> > > >>
> > > >> With this configuration you could provide your reverse proxy to the
> > > >> outside world while having the balancer-manager manage the balancer
> > > >> while listening only on localhost. This is not possible any longer
> > > >> with 2.2.31, as worker->s is now different in each virtualhost,
> > > >> whereas before it was the same because worker->id was used to
> > > >> identify the scoreboard slot. The patches proposed so far would not
> > > >> change this. I think in order to revert to the old behaviour we need
> > > >> to store with each worker the server_rec context it was created in,
> > > >> e.g. by adding a const char * field to the worker that would be
> > > >> filled with server->server_hostname. Then we could use this value
> > > >> when creating the md5.
> > > >>
> > > >> Regards
> > > >>
> > > >> Rüdiger
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: Jan Kaluža [mailto:jkaluza@redhat.com]
> > > >>> Sent: Dienstag, 25. August 2015 10:23
> > > >>> To: dev@httpd.apache.org
> > > >>> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
> > > >>>
> > > >>> On 08/24/2015 11:12 PM, Yann Ylavic wrote:
> > > >>>> On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic
> <yl...@gmail.com>
> > > >>> wrote:
> > > >>>>>
> > > >>>>> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com>
> > > >> wrote:
> > > >>>>>>
> > > >>>>>> 2) Increment proxy_lb_workers according to number of workers in
> > > >>>>>> balancer when using "ProxyPass /foobar/ Balancer://foobar/" in
> > > >>>>>> the VirtualHost. The scoreboard would have right size and
> > > >>>>>> ap_proxy_set_scoreboard_lb would not fail then.
> > > >>>>>
> > > >>>>> I think we can do this quite easily in merge_proxy_config(), by
> > > >>>>> incrementing proxy_lb_workers for each base->balancers->workers.
> > > >>>>> I did not test it yet though.
> > > >>>>
> > > >>>> I tested the below which seems to work.
> > > >>>
> > > >>> Hm, this reserves the slots in the scoreboard even when the
> > > >>> balancers are not used in the virtualhost, or am I wrong?
> > > >>>
> > > >>> I originally thought about increasing proxy_lb_workers only when
> > > >>> they are used in a ProxyPass* in the vhost.
> > > >>>
> > > >>>> Index: modules/proxy/mod_proxy.c
> > > >>>>
> ===================================================================
> > > >>>> --- modules/proxy/mod_proxy.c    (revision 1697358)
> > > >>>> +++ modules/proxy/mod_proxy.c    (working copy)
> > > >>>> @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s
> > > >>>>
> > > >>>>    static void * merge_proxy_config(apr_pool_t *p, void *basev,
> > void
> > > >>> *overridesv)
> > > >>>>    {
> > > >>>> +    int i;
> > > >>>>        proxy_server_conf *ps = apr_pcalloc(p,
> > > >> sizeof(proxy_server_conf));
> > > >>>>        proxy_server_conf *base = (proxy_server_conf *) basev;
> > > >>>>        proxy_server_conf *overrides = (proxy_server_conf *)
> > > overridesv;
> > > >>>> @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p, vo
> > > >>>>        ps->allowed_connect_ports = apr_array_append(p,
> > > >>>> base->allowed_connect_ports, overrides->allowed_connect_ports);
> > > >>>>        ps->workers = apr_array_append(p, base->workers, overrides->workers);
> > > >>>>        ps->balancers = apr_array_append(p, base->balancers, overrides->balancers);
> > > >>>> +    /* The balancers inherited from base don't have their members reserved on
> > > >>>> +     * the scoreboard_lb for this server, account for them now.
> > > >>>> +     */
> > > >>>> +    for (i = 0; i < base->balancers->nelts; ++i) {
> > > >>>> +        proxy_balancer *balancer = (proxy_balancer *)base->balancers->elts + i;
> > > >>>> +        proxy_lb_workers += balancer->workers->nelts;
> > > >>>> +    }
> > > >>>>        ps->forward = overrides->forward ? overrides->forward : base->forward;
> > > >>>>        ps->reverse = overrides->reverse ? overrides->reverse : base->reverse;
> > > >>>>
> > > >>>> --
> > > >>>>
> > > >>>> Please note that since all the workers would really be accounted
> > > >>>> in the scoreboard, configurations like the one of PR 58267 (with
> > > >>>> inherited balancers) would also need bigger SHMs (but no more)
> > > >>>> than currently...
> > > >>>>
> > > >>>> WDYT?
> > > >>>>
> > > >>>
> > > >>> Regards,
> > > >>> Jan Kaluza
> > > >>>
> > > >


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
Now the complete patch (including the MMN bump):

Index: modules/proxy/proxy_util.c
===================================================================
--- modules/proxy/proxy_util.c	(revision 1697578)
+++ modules/proxy/proxy_util.c	(working copy)
@@ -1460,6 +1460,7 @@
     (*worker)->flush_packets = flush_off;
     (*worker)->flush_wait = PROXY_FLUSH_WAIT;
     (*worker)->smax = -1;
+    (*worker)->server = conf->s;
     /* Increase the total worker count */
     proxy_lb_workers++;
     init_conn_pool(p, *worker);
@@ -1807,6 +1808,7 @@
                                                 server_rec *server)
 {
     if (ap_scoreboard_image && !worker->s) {
+        server_rec *id_server;
         int i = 0;
         proxy_worker_stat *free_slot = NULL;
         proxy_worker_stat *s;
@@ -1824,14 +1826,20 @@
             apr_md5_update(&ctx, (unsigned char *)balancer->name,
                            strlen(balancer->name));
         }
-        if (server) {
+        if (worker->server) {
+            id_server = worker->server;
+        }
+        else {
+            id_server = server;
+        }
+        if (id_server) {
             server_addr_rec *addr;
             /* Assumes the unique identifier of a vhost is its address(es)
              * plus the ServerName:Port. Should two or more vhosts have this
              * same identifier, the first one would always be elected to
              * handle the requests, so this shouldn't be an issue...
              */
-            for (addr = server->addrs; addr; addr = addr->next) {
+            for (addr = id_server->addrs; addr; addr = addr->next) {
                 char host_ip[64]; /* for any IPv[46] string */
                 apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
                                        addr->host_addr);
@@ -1840,10 +1848,10 @@
                 apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
                                sizeof(addr->host_port));
             }
-            apr_md5_update(&ctx, (unsigned char *)server->server_hostname,
-                           strlen(server->server_hostname));
-            apr_md5_update(&ctx, (unsigned char *)&server->port,
-                           sizeof(server->port));
+            apr_md5_update(&ctx, (unsigned char *)id_server->server_hostname,
+                           strlen(id_server->server_hostname));
+            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
+                           sizeof(id_server->port));
         }
         apr_md5_final(digest, &ctx);
 
Index: modules/proxy/mod_proxy.c
===================================================================
--- modules/proxy/mod_proxy.c	(revision 1697578)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -1129,6 +1129,7 @@
     ps->badopt = bad_error;
     ps->badopt_set = 0;
     ps->pool = p;
+    ps->s = s;
 
     return ps;
 }
@@ -1172,6 +1173,7 @@
     ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
     ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
     ps->pool = p;
+    ps->s = overrides->s;
     return ps;
 }
 
Index: modules/proxy/mod_proxy.h
===================================================================
--- modules/proxy/mod_proxy.h	(revision 1697578)
+++ modules/proxy/mod_proxy.h	(working copy)
@@ -193,6 +193,7 @@
     } proxy_status;             /* Status display options */
     char proxy_status_set;
     apr_pool_t *pool;           /* Pool used for allocating this struct */
+    server_rec *s;              /* The server_rec this configuration was created in */
 } proxy_server_conf;
 
 
@@ -369,6 +370,7 @@
     char            disablereuse_set;
     apr_interval_time_t conn_timeout;
     char            conn_timeout_set;
+    server_rec      *server;    /* The server_rec this configuration was created in */
 };
 
 /*
Index: include/ap_mmn.h
===================================================================
--- include/ap_mmn.h	(revision 1697578)
+++ include/ap_mmn.h	(working copy)
@@ -158,6 +158,8 @@
  * 20051115.38 (2.2.30) Add ap_proxy_set_scoreboard_lb() in mod_proxy.h
  * 20051115.39 (2.2.30) Add ap_proxy_connection_reusable()
  * 20051115.40 (2.2.30) Add ap_map_http_request_error()
+ * 20051115.41 (2.2.32) Add s member to proxy_server_conf struct and server
+ *                      member to proxy_worker struct.
  */
 
 #define MODULE_MAGIC_COOKIE 0x41503232UL /* "AP22" */
@@ -165,7 +167,7 @@
 #ifndef MODULE_MAGIC_NUMBER_MAJOR
 #define MODULE_MAGIC_NUMBER_MAJOR 20051115
 #endif
-#define MODULE_MAGIC_NUMBER_MINOR 40                    /* 0...n */
+#define MODULE_MAGIC_NUMBER_MINOR 41                    /* 0...n */
 
 /**
  * Determine if the server's current MODULE_MAGIC_NUMBER is at least a


Regards

Rüdiger



Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jan Kaluža <jk...@redhat.com>.
On 08/25/2015 02:41 PM, Plüm, Rüdiger, Vodafone Group wrote:
> Thanks for the pointer. It is missing because I removed it by accident when excluding some debug code I setup locally for analysing the issue from the patch I posted. I will post a proper version and if you agree put it in STATUS for 2.2.x. As far as I can tell this change only applies to 2.2.x. So it would be fine to propose it directly in STATUS without any trunk commit.

I agree.

Jan Kaluza

>>>>>> +     */
>>>>>> +    for (i = 0; i < base->balancers->nelts; ++i) {
>>>>>> +        proxy_balancer *balancer = (proxy_balancer *)base-
>>> balancers-
>>>>>> elts + i;
>>>>>> +        proxy_lb_workers += balancer->workers->nelts;
>>>>>> +    }
>>>>>>         ps->forward = overrides->forward ? overrides->forward : base-
>>>>>> forward;
>>>>>>         ps->reverse = overrides->reverse ? overrides->reverse : base-
>>>>>> reverse;
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Please note that since all the workers would really be accounted in
>>>>>> the scoreboard, configurations like the one of PR 58267 (with
>>>>>> inherited balancers) would also need bigger SHMs (but no more) than
>>>>>> currently...
>>>>>>
>>>>>> WDYT?
>>>>>>
>>>>>
>>>>> Regards,
>>>>> Jan Kaluza
>>>>>
>>>
>


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
Thanks for the pointer. It is missing because I removed it by accident while stripping some local debug code, which I had set up to analyse the issue, from the patch I posted. I will post a proper version and, if you agree, put it in STATUS for 2.2.x. As far as I can tell this change only applies to 2.2.x, so it would be fine to propose it directly in STATUS without any trunk commit.

Regards

Rüdiger

> -----Original Message-----
> From: Jan Kaluža [mailto:jkaluza@redhat.com]
> Sent: Dienstag, 25. August 2015 14:15
> To: dev@httpd.apache.org
> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
> 
> On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:
> > How about the following patch? It uses the server_rec of the server that
> originally created the configuration item.
> 
> This looks like a good strategy. I've verified that the patch fixes this
> issue and does not break anything in my brief testing.
> 
> What do you think, Yann?
> 
> Note that the "id_server" declaration is missing in the patch. Otherwise
> it's OK.
> 
> Regards,
> Jan Kaluza
> 
> > Index: modules/proxy/proxy_util.c
> > ===================================================================
> > --- modules/proxy/proxy_util.c	(revision 1697578)
> > +++ modules/proxy/proxy_util.c	(working copy)
> > @@ -1460,6 +1460,7 @@
> >       (*worker)->flush_packets = flush_off;
> >       (*worker)->flush_wait = PROXY_FLUSH_WAIT;
> >       (*worker)->smax = -1;
> > +    (*worker)->server = conf->s;
> >       /* Increase the total worker count */
> >       proxy_lb_workers++;
> >       init_conn_pool(p, *worker);
> > @@ -1824,14 +1829,20 @@
> >               apr_md5_update(&ctx, (unsigned char *)balancer->name,
> >                              strlen(balancer->name));
> >           }
> > -        if (server) {
> > +        if (worker->server) {
> > +            id_server = worker->server;
> > +        }
> > +        else {
> > +            id_server = server;
> > +        }
> > +        if (id_server) {
> >               server_addr_rec *addr;
> >               /* Assumes the unique identifier of a vhost is its
> address(es)
> >                * plus the ServerName:Port. Should two or more vhosts
> have this
> >                * same identifier, the first one would always be elected
> to
> >                * handle the requests, so this shouldn't be an issue...
> >                */
> > -            for (addr = server->addrs; addr; addr = addr->next) {
> > +            for (addr = id_server->addrs; addr; addr = addr->next) {
> >                   char host_ip[64]; /* for any IPv[46] string */
> >                   apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
> >                                          addr->host_addr);
> > @@ -1840,10 +1851,10 @@
> >                   apr_md5_update(&ctx, (unsigned char *)&addr-
> >host_port,
> >                                  sizeof(addr->host_port));
> >               }
> > -            apr_md5_update(&ctx, (unsigned char *)server-
> >server_hostname,
> > -                           strlen(server->server_hostname));
> > -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> > -                           sizeof(server->port));
> > +            apr_md5_update(&ctx, (unsigned char *)id_server-
> >server_hostname,
> > +                           strlen(id_server->server_hostname));
> > +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> > +                           sizeof(id_server->port));
> >           }
> >           apr_md5_final(digest, &ctx);
> >
> > Index: modules/proxy/mod_proxy.c
> > ===================================================================
> > --- modules/proxy/mod_proxy.c	(revision 1697578)
> > +++ modules/proxy/mod_proxy.c	(working copy)
> > @@ -1129,6 +1129,7 @@
> >       ps->badopt = bad_error;
> >       ps->badopt_set = 0;
> >       ps->pool = p;
> > +    ps->s = s;
> >
> >       return ps;
> >   }
> > @@ -1172,6 +1173,7 @@
> >       ps->proxy_status = (overrides->proxy_status_set == 0) ? base-
> >proxy_status : overrides->proxy_status;
> >       ps->proxy_status_set = overrides->proxy_status_set || base-
> >proxy_status_set;
> >       ps->pool = p;
> > +    ps->s = overrides->s;
> >       return ps;
> >   }
> >
> > Index: modules/proxy/mod_proxy.h
> > ===================================================================
> > --- modules/proxy/mod_proxy.h	(revision 1697578)
> > +++ modules/proxy/mod_proxy.h	(working copy)
> > @@ -193,6 +193,7 @@
> >       } proxy_status;             /* Status display options */
> >       char proxy_status_set;
> >       apr_pool_t *pool;           /* Pool used for allocating this
> struct */
> > +    server_rec *s;              /* The server_rec where this
> configuration was created in */
> >   } proxy_server_conf;
> >
> >
> > @@ -369,6 +370,7 @@
> >       char            disablereuse_set;
> >       apr_interval_time_t conn_timeout;
> >       char            conn_timeout_set;
> > +    server_rec      *server;    /* The server_rec where this
> configuration was created in */
> >   };
> >
> >   /*
> >
> >
> > Regards
> >
> > Rüdiger


Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jan Kaluža <jk...@redhat.com>.
On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:
> How about the following patch? It uses the server_rec of the server that originally created the configuration item.

This looks like a good strategy. I've verified that the patch fixes this
issue and does not break anything in my brief testing.

What do you think, Yann?

Note that the "id_server" declaration is missing in the patch. Otherwise
it's OK.

Regards,
Jan Kaluza

> Index: modules/proxy/proxy_util.c
> ===================================================================
> --- modules/proxy/proxy_util.c	(revision 1697578)
> +++ modules/proxy/proxy_util.c	(working copy)
> @@ -1460,6 +1460,7 @@
>       (*worker)->flush_packets = flush_off;
>       (*worker)->flush_wait = PROXY_FLUSH_WAIT;
>       (*worker)->smax = -1;
> +    (*worker)->server = conf->s;
>       /* Increase the total worker count */
>       proxy_lb_workers++;
>       init_conn_pool(p, *worker);
> @@ -1824,14 +1829,20 @@
>               apr_md5_update(&ctx, (unsigned char *)balancer->name,
>                              strlen(balancer->name));
>           }
> -        if (server) {
> +        if (worker->server) {
> +            id_server = worker->server;
> +        }
> +        else {
> +            id_server = server;
> +        }
> +        if (id_server) {
>               server_addr_rec *addr;
>               /* Assumes the unique identifier of a vhost is its address(es)
>                * plus the ServerName:Port. Should two or more vhosts have this
>                * same identifier, the first one would always be elected to
>                * handle the requests, so this shouldn't be an issue...
>                */
> -            for (addr = server->addrs; addr; addr = addr->next) {
> +            for (addr = id_server->addrs; addr; addr = addr->next) {
>                   char host_ip[64]; /* for any IPv[46] string */
>                   apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
>                                          addr->host_addr);
> @@ -1840,10 +1851,10 @@
>                   apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
>                                  sizeof(addr->host_port));
>               }
> -            apr_md5_update(&ctx, (unsigned char *)server->server_hostname,
> -                           strlen(server->server_hostname));
> -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> -                           sizeof(server->port));
> +            apr_md5_update(&ctx, (unsigned char *)id_server->server_hostname,
> +                           strlen(id_server->server_hostname));
> +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> +                           sizeof(id_server->port));
>           }
>           apr_md5_final(digest, &ctx);
>
> Index: modules/proxy/mod_proxy.c
> ===================================================================
> --- modules/proxy/mod_proxy.c	(revision 1697578)
> +++ modules/proxy/mod_proxy.c	(working copy)
> @@ -1129,6 +1129,7 @@
>       ps->badopt = bad_error;
>       ps->badopt_set = 0;
>       ps->pool = p;
> +    ps->s = s;
>
>       return ps;
>   }
> @@ -1172,6 +1173,7 @@
>       ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
>       ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
>       ps->pool = p;
> +    ps->s = overrides->s;
>       return ps;
>   }
>
> Index: modules/proxy/mod_proxy.h
> ===================================================================
> --- modules/proxy/mod_proxy.h	(revision 1697578)
> +++ modules/proxy/mod_proxy.h	(working copy)
> @@ -193,6 +193,7 @@
>       } proxy_status;             /* Status display options */
>       char proxy_status_set;
>       apr_pool_t *pool;           /* Pool used for allocating this struct */
> +    server_rec *s;              /* The server_rec this configuration was created in */
>   } proxy_server_conf;
>
>
> @@ -369,6 +370,7 @@
>       char            disablereuse_set;
>       apr_interval_time_t conn_timeout;
>       char            conn_timeout_set;
> +    server_rec      *server;    /* The server_rec this configuration was created in */
>   };
>
>   /*
>
>
> Regards
>
> Rüdiger


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
How about the following patch? It uses the server_rec of the server that originally created the configuration item.

Index: modules/proxy/proxy_util.c
===================================================================
--- modules/proxy/proxy_util.c	(revision 1697578)
+++ modules/proxy/proxy_util.c	(working copy)
@@ -1460,6 +1460,7 @@
     (*worker)->flush_packets = flush_off;
     (*worker)->flush_wait = PROXY_FLUSH_WAIT;
     (*worker)->smax = -1;
+    (*worker)->server = conf->s;
     /* Increase the total worker count */
     proxy_lb_workers++;
     init_conn_pool(p, *worker);
@@ -1824,14 +1829,20 @@
             apr_md5_update(&ctx, (unsigned char *)balancer->name,
                            strlen(balancer->name));
         }
-        if (server) {
+        if (worker->server) {
+            id_server = worker->server;
+        }
+        else {
+            id_server = server;
+        }
+        if (id_server) {
             server_addr_rec *addr;
             /* Assumes the unique identifier of a vhost is its address(es)
              * plus the ServerName:Port. Should two or more vhosts have this
              * same identifier, the first one would always be elected to
              * handle the requests, so this shouldn't be an issue...
              */
-            for (addr = server->addrs; addr; addr = addr->next) {
+            for (addr = id_server->addrs; addr; addr = addr->next) {
                 char host_ip[64]; /* for any IPv[46] string */
                 apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
                                        addr->host_addr);
@@ -1840,10 +1851,10 @@
                 apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
                                sizeof(addr->host_port));
             }
-            apr_md5_update(&ctx, (unsigned char *)server->server_hostname,
-                           strlen(server->server_hostname));
-            apr_md5_update(&ctx, (unsigned char *)&server->port,
-                           sizeof(server->port));
+            apr_md5_update(&ctx, (unsigned char *)id_server->server_hostname,
+                           strlen(id_server->server_hostname));
+            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
+                           sizeof(id_server->port));
         }
         apr_md5_final(digest, &ctx);
 
Index: modules/proxy/mod_proxy.c
===================================================================
--- modules/proxy/mod_proxy.c	(revision 1697578)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -1129,6 +1129,7 @@
     ps->badopt = bad_error;
     ps->badopt_set = 0;
     ps->pool = p;
+    ps->s = s;
 
     return ps;
 }
@@ -1172,6 +1173,7 @@
     ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
     ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
     ps->pool = p;
+    ps->s = overrides->s;
     return ps;
 }
 
Index: modules/proxy/mod_proxy.h
===================================================================
--- modules/proxy/mod_proxy.h	(revision 1697578)
+++ modules/proxy/mod_proxy.h	(working copy)
@@ -193,6 +193,7 @@
     } proxy_status;             /* Status display options */
     char proxy_status_set;
     apr_pool_t *pool;           /* Pool used for allocating this struct */
+    server_rec *s;              /* The server_rec this configuration was created in */
 } proxy_server_conf;
 
 
@@ -369,6 +370,7 @@
     char            disablereuse_set;
     apr_interval_time_t conn_timeout;
     char            conn_timeout_set;
+    server_rec      *server;    /* The server_rec this configuration was created in */
 };
 
 /*


Regards

Rüdiger



Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jim Jagielski <ji...@jaguNET.com>.
Looks good to me... Even though this is 2.2.x I did some quick
and dirty testing :)

> On Aug 25, 2015, at 5:44 AM, Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com> wrote:
> 
> Of course it requires a minor bump.
> 
> Regards
> 
> Rüdiger
> 
>> +++ modules/proxy/mod_proxy.h	(working copy)
>> @@ -193,6 +193,7 @@
>>     } proxy_status;             /* Status display options */
>>     char proxy_status_set;
>>     apr_pool_t *pool;           /* Pool used for allocating this struct
>> */
>> +    server_rec *s;              /* The server_rec where this
>> configuration was created in */
>> } proxy_server_conf;
>> 
>> 
>> @@ -369,6 +370,7 @@
>>     char            disablereuse_set;
>>     apr_interval_time_t conn_timeout;
>>     char            conn_timeout_set;
>> +    server_rec      *server;    /* The server_rec where this
>> configuration was created in */
>> };
>> 
>> /*
>> 
>> 
>> Regards
>> 
>> Rüdiger
>> 
>>> -----Original Message-----
>>> From: Plüm, Rüdiger, Vodafone Group
>>> Sent: Dienstag, 25. August 2015 10:48
>>> To: dev@httpd.apache.org
>>> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
>>> 
>>> I think the current state of 2.2.31 breaks existing 2.2.x configuration
>>> prior to 2.2.31.
>>> Prior to 2.2.31 you could do the following:
>>> 
>>> <Proxy Balancer://proxy1>
>>> BalancerMember ajp://127.0.0.1:7001
>>> BalancerMember ajp://127.0.0.2:7002
>>> </Proxy>
>>> 
>>> <virtualhost *:80>
>>> ProxyPass / balancer://proxy1/
>>> </virtualhost>
>>> 
>>> <virtualhost 127.0.0.1:8080>
>>> 
>>> <Location /bmanager>
>>>   sethandler balancer-manager
>>> </Location>
>>> 
>>> </virtualhost>
>>> 
>>> With this configuration you could provide your reverse proxy to the
>>> outside world while having the
>>> balancermanager managing the balancer only listening to localhost. This
>> is
>>> not possible any longer with 2.2.31
>>> as the worker->s is now different in each virtualhost whereas before it
>>> was the same as the worker->id was used
>>> to identify the scoreboard slot.
>>> The patches proposed so far would not change this. I think in order to
>>> revert to the old behaviour we need to
>>> store with each worker in which server_rec context it was created. e.g.
>>> adding a const char * field to the worker that would be filled with
>>> server->server_hostname. Then we could use this value for creating the
>>> md5.
>>> 
>>> Regards
>>> 
>>> Rüdiger
>>> 
>>>> -----Original Message-----
>>>> From: Jan Kaluža [mailto:jkaluza@redhat.com]
>>>> Sent: Dienstag, 25. August 2015 10:23
>>>> To: dev@httpd.apache.org
>>>> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
>>>> 
>>>> On 08/24/2015 11:12 PM, Yann Ylavic wrote:
>>>>> On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com>
>>>> wrote:
>>>>>> 
>>>>>> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com>
>>> wrote:
>>>>>>> 
>>>>>>> 2) Increment proxy_lb_workers according to number of workers in
>>>> balancer
>>>>>>> when using "ProxyPass /foobar/ Balancer://foobar/" in the
>>> VirtualHost.
>>>> The
>>>>>>> scoreboard would have right size and ap_proxy_set_scoreboard_lb
>>> would
>>>> not
>>>>>>> fail then.
>>>>>> 
>>>>>> I think we can do this quite easily in merge_proxy_config(), by
>>>>>> incrementing proxy_lb_workers for each base->balancers->workers. I
>>> did
>>>>>> not test it yet though.
>>>>> 
>>>>> I tested the below which seems to work.
>>>> 
>>>> Hm, this reserves the slots in scoreboard even when the balancers are
>>>> not used in the virtualhost, or am I wrong?
>>>> 
>>>> I originally thought about increasing proxy_lb_workers only when they
>>>> are used in the ProxyPass* in the vhost.
>>>> 
>>>>> Index: modules/proxy/mod_proxy.c
>>>>> ===================================================================
>>>>> --- modules/proxy/mod_proxy.c    (revision 1697358)
>>>>> +++ modules/proxy/mod_proxy.c    (working copy)
>>>>> @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t
>> *p,
>>> s
>>>>> 
>>>>>  static void * merge_proxy_config(apr_pool_t *p, void *basev, void
>>>> *overridesv)
>>>>>  {
>>>>> +    int i;
>>>>>      proxy_server_conf *ps = apr_pcalloc(p,
>>> sizeof(proxy_server_conf));
>>>>>      proxy_server_conf *base = (proxy_server_conf *) basev;
>>>>>      proxy_server_conf *overrides = (proxy_server_conf *)
>> overridesv;
>>>>> @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t
>> *p,
>>>> vo
>>>>>      ps->allowed_connect_ports = apr_array_append(p,
>>>>> base->allowed_connect_ports, overrides->allowed_connect_ports);
>>>>>      ps->workers = apr_array_append(p, base->workers, overrides-
>>>>> workers);
>>>>>      ps->balancers = apr_array_append(p, base->balancers,
>> overrides-
>>>>> balancers);
>>>>> +    /* The balancers inherited from base don't have their members
>>>> reserved on
>>>>> +     * the scoreboard_lb for this server, account for them now.
>>>>> +     */
>>>>> +    for (i = 0; i < base->balancers->nelts; ++i) {
>>>>> +        proxy_balancer *balancer = (proxy_balancer *)base-
>>> balancers-
>>>>> elts + i;
>>>>> +        proxy_lb_workers += balancer->workers->nelts;
>>>>> +    }
>>>>>      ps->forward = overrides->forward ? overrides->forward : base-
>>>>> forward;
>>>>>      ps->reverse = overrides->reverse ? overrides->reverse : base-
>>>>> reverse;
>>>>> 
>>>>> --
>>>>> 
>>>>> Please note that since all the workers would really be accounted in
>>>>> the scoreboard, configurations like the one of PR 58267 (with
>>>>> inherited balancers) would also need bigger SHMs (but no more) than
>>>>> currently...
>>>>> 
>>>>> WDYT?
>>>>> 
>>>> 
>>>> Regards,
>>>> Jan Kaluza
>>>> 
> 


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
Of course it requires a minor bump.

Regards

Rüdiger

> -----Original Message-----
> From: Plüm, Rüdiger, Vodafone Group
> Sent: Dienstag, 25. August 2015 11:39
> To: dev@httpd.apache.org
> Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
> 
> How about the following patch? It uses the server_rec of the server that
> originally created the configuration item.
> 
> Index: modules/proxy/proxy_util.c
> ===================================================================
> --- modules/proxy/proxy_util.c	(revision 1697578)
> +++ modules/proxy/proxy_util.c	(working copy)
> @@ -1460,6 +1460,7 @@
>      (*worker)->flush_packets = flush_off;
>      (*worker)->flush_wait = PROXY_FLUSH_WAIT;
>      (*worker)->smax = -1;
> +    (*worker)->server = conf->s;
>      /* Increase the total worker count */
>      proxy_lb_workers++;
>      init_conn_pool(p, *worker);
> @@ -1824,14 +1829,20 @@
>              apr_md5_update(&ctx, (unsigned char *)balancer->name,
>                             strlen(balancer->name));
>          }
> -        if (server) {
> +        if (worker->server) {
> +            id_server = worker->server;
> +        }
> +        else {
> +            id_server = server;
> +        }
> +        if (id_server) {
>              server_addr_rec *addr;
>              /* Assumes the unique identifier of a vhost is its
> address(es)
>               * plus the ServerName:Port. Should two or more vhosts have
> this
>               * same identifier, the first one would always be elected to
>               * handle the requests, so this shouldn't be an issue...
>               */
> -            for (addr = server->addrs; addr; addr = addr->next) {
> +            for (addr = id_server->addrs; addr; addr = addr->next) {
>                  char host_ip[64]; /* for any IPv[46] string */
>                  apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
>                                         addr->host_addr);
> @@ -1840,10 +1851,10 @@
>                  apr_md5_update(&ctx, (unsigned char *)&addr->host_port,
>                                 sizeof(addr->host_port));
>              }
> -            apr_md5_update(&ctx, (unsigned char *)server-
> >server_hostname,
> -                           strlen(server->server_hostname));
> -            apr_md5_update(&ctx, (unsigned char *)&server->port,
> -                           sizeof(server->port));
> +            apr_md5_update(&ctx, (unsigned char *)id_server-
> >server_hostname,
> +                           strlen(id_server->server_hostname));
> +            apr_md5_update(&ctx, (unsigned char *)&id_server->port,
> +                           sizeof(id_server->port));
>          }
>          apr_md5_final(digest, &ctx);
> 
> Index: modules/proxy/mod_proxy.c
> ===================================================================
> --- modules/proxy/mod_proxy.c	(revision 1697578)
> +++ modules/proxy/mod_proxy.c	(working copy)
> @@ -1129,6 +1129,7 @@
>      ps->badopt = bad_error;
>      ps->badopt_set = 0;
>      ps->pool = p;
> +    ps->s = s;
> 
>      return ps;
>  }
> @@ -1172,6 +1173,7 @@
>      ps->proxy_status = (overrides->proxy_status_set == 0) ? base-
> >proxy_status : overrides->proxy_status;
>      ps->proxy_status_set = overrides->proxy_status_set || base-
> >proxy_status_set;
>      ps->pool = p;
> +    ps->s = overrides->s;
>      return ps;
>  }
> 
> Index: modules/proxy/mod_proxy.h
> ===================================================================
> --- modules/proxy/mod_proxy.h	(revision 1697578)
> +++ modules/proxy/mod_proxy.h	(working copy)
> @@ -193,6 +193,7 @@
>      } proxy_status;             /* Status display options */
>      char proxy_status_set;
>      apr_pool_t *pool;           /* Pool used for allocating this struct
> */
> +    server_rec *s;              /* The server_rec where this
> configuration was created in */
>  } proxy_server_conf;
> 
> 
> @@ -369,6 +370,7 @@
>      char            disablereuse_set;
>      apr_interval_time_t conn_timeout;
>      char            conn_timeout_set;
> +    server_rec      *server;    /* The server_rec where this
> configuration was created in */
>  };
> 
>  /*
> 
> 
> Regards
> 
> Rüdiger
> 
> > -----Original Message-----
> > From: Plüm, Rüdiger, Vodafone Group
> > Sent: Dienstag, 25. August 2015 10:48
> > To: dev@httpd.apache.org
> > Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920
> >
> > I think the current state of 2.2.31 breaks existing 2.2.x configuration
> > prior to 2.2.31.
> > Prior to 2.2.31 you could do the following:
> >
> > <Proxy Balancer://proxy1>
> > BalancerMember ajp://127.0.0.1:7001
> > BalancerMember ajp://127.0.0.2:7002
> > </Proxy>
> >
> > <virtualhost *:80>
> > ProxyPass / balancer://proxy1/
> > </virtualhost>
> >
> > <virtualhost 127.0.0.1:8080>
> >
> > <Location /bmanager>
> >    sethandler balancer-manager
> > </Location>
> >
> > </virtualhost>
> >
> > With this configuration you could provide your reverse proxy to the
> > outside world while having the
> > balancermanager managing the balancer only listening to localhost. This
> is
> > not possible any longer with 2.2.31
> > as the worker->s is now different in each virtualhost whereas before it
> > was the same as the worker->id was used
> > to identify the scoreboard slot.
> > The patches proposed so far would not change this. I think in order to
> > revert to the old behaviour we need to
> > store with each worker in which server_rec context it was created. e.g.
> > adding a const char * field to the worker that would be filled with
> > server->server_hostname. Then we could use this value for creating the
> > md5.
> >
> > Regards
> >
> > Rüdiger
> >
> > > -----Original Message-----
> > > From: Jan Kaluža [mailto:jkaluza@redhat.com]
> > > Sent: Dienstag, 25. August 2015 10:23
> > > To: dev@httpd.apache.org
> > > Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
> > >
> > > On 08/24/2015 11:12 PM, Yann Ylavic wrote:
> > > > On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com>
> > > wrote:
> > > >>
> > > >> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com>
> > wrote:
> > > >>>
> > > >>> 2) Increment proxy_lb_workers according to number of workers in
> > > balancer
> > > >>> when using "ProxyPass /foobar/ Balancer://foobar/" in the
> > VirtualHost.
> > > The
> > > >>> scoreboard would have right size and ap_proxy_set_scoreboard_lb
> > would
> > > not
> > > >>> fail then.
> > > >>
> > > >> I think we can do this quite easily in merge_proxy_config(), by
> > > >> incrementing proxy_lb_workers for each base->balancers->workers. I
> > did
> > > >> not test it yet though.
> > > >
> > > > I tested the below which seems to work.
> > >
> > > Hm, this reserves the slots in scoreboard even when the balancers are
> > > not used in the virtualhost, or am I wrong?
> > >
> > > I originally thought about increasing proxy_lb_workers only when they
> > > are used in the ProxyPass* in the vhost.
> > >
> > > > Index: modules/proxy/mod_proxy.c
> > > > ===================================================================
> > > > --- modules/proxy/mod_proxy.c    (revision 1697358)
> > > > +++ modules/proxy/mod_proxy.c    (working copy)
> > > > @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t
> *p,
> > s
> > > >
> > > >   static void * merge_proxy_config(apr_pool_t *p, void *basev, void
> > > *overridesv)
> > > >   {
> > > > +    int i;
> > > >       proxy_server_conf *ps = apr_pcalloc(p,
> > sizeof(proxy_server_conf));
> > > >       proxy_server_conf *base = (proxy_server_conf *) basev;
> > > >       proxy_server_conf *overrides = (proxy_server_conf *)
> overridesv;
> > > > @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t
> *p,
> > > vo
> > > >       ps->allowed_connect_ports = apr_array_append(p,
> > > > base->allowed_connect_ports, overrides->allowed_connect_ports);
> > > >       ps->workers = apr_array_append(p, base->workers, overrides-
> > > >workers);
> > > >       ps->balancers = apr_array_append(p, base->balancers,
> overrides-
> > > >balancers);
> > > > +    /* The balancers inherited from base don't have their members
> > > reserved on
> > > > +     * the scoreboard_lb for this server, account for them now.
> > > > +     */
> > > > +    for (i = 0; i < base->balancers->nelts; ++i) {
> > > > +        proxy_balancer *balancer = (proxy_balancer *)base-
> >balancers-
> > > >elts + i;
> > > > +        proxy_lb_workers += balancer->workers->nelts;
> > > > +    }
> > > >       ps->forward = overrides->forward ? overrides->forward : base-
> > > >forward;
> > > >       ps->reverse = overrides->reverse ? overrides->reverse : base-
> > > >reverse;
> > > >
> > > > --
> > > >
> > > > Please note that since all the workers would really be accounted in
> > > > the scoreboard, configurations like the one of PR 58267 (with
> > > > inherited balancers) would also need bigger SHMs (but no more) than
> > > > currently...
> > > >
> > > > WDYT?
> > > >
> > >
> > > Regards,
> > > Jan Kaluza
> > >


RE: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Plüm, Rüdiger, Vodafone Group <ru...@vodafone.com>.
I think the current state of 2.2.31 breaks existing 2.2.x configurations that worked prior to 2.2.31.
Prior to 2.2.31 you could do the following:

<Proxy Balancer://proxy1>
BalancerMember ajp://127.0.0.1:7001
BalancerMember ajp://127.0.0.2:7002
</Proxy>

<virtualhost *:80>
ProxyPass / balancer://proxy1/
</virtualhost>

<virtualhost 127.0.0.1:8080>

<Location /bmanager>
   sethandler balancer-manager
</Location>

</virtualhost>

With this configuration you could provide your reverse proxy to the outside world while having the
balancer-manager that manages the balancer listening only on localhost. This is no longer possible with 2.2.31,
because worker->s is now different in each virtualhost, whereas before it was the same, since the worker->id was used
to identify the scoreboard slot.
The patches proposed so far would not change this. I think that in order to revert to the old behaviour we need to
store with each worker the server_rec context in which it was created, e.g. by adding a const char * field to the worker that would be filled with
server->server_hostname. Then we could use this value for creating the MD5.

Regards

Rüdiger

> -----Original Message-----
> From: Jan Kaluža [mailto:jkaluza@redhat.com]
> Sent: Dienstag, 25. August 2015 10:23
> To: dev@httpd.apache.org
> Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920
> 
> On 08/24/2015 11:12 PM, Yann Ylavic wrote:
> > On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com>
> wrote:
> >>
> >> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com> wrote:
> >>>
> >>> 2) Increment proxy_lb_workers according to number of workers in
> balancer
> >>> when using "ProxyPass /foobar/ Balancer://foobar/" in the VirtualHost.
> The
> >>> scoreboard would have right size and ap_proxy_set_scoreboard_lb would
> not
> >>> fail then.
> >>
> >> I think we can do this quite easily in merge_proxy_config(), by
> >> incrementing proxy_lb_workers for each base->balancers->workers. I did
> >> not test it yet though.
> >
> > I tested the below which seems to work.
> 
> Hm, this reserves the slots in scoreboard even when the balancers are
> not used in the virtualhost, or am I wrong?
> 
> I originally thought about increasing proxy_lb_workers only when they
> are used in the ProxyPass* in the vhost.
> 
> > Index: modules/proxy/mod_proxy.c
> > ===================================================================
> > --- modules/proxy/mod_proxy.c    (revision 1697358)
> > +++ modules/proxy/mod_proxy.c    (working copy)
> > @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s
> >
> >   static void * merge_proxy_config(apr_pool_t *p, void *basev, void
> *overridesv)
> >   {
> > +    int i;
> >       proxy_server_conf *ps = apr_pcalloc(p, sizeof(proxy_server_conf));
> >       proxy_server_conf *base = (proxy_server_conf *) basev;
> >       proxy_server_conf *overrides = (proxy_server_conf *) overridesv;
> > @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p,
> vo
> >       ps->allowed_connect_ports = apr_array_append(p,
> > base->allowed_connect_ports, overrides->allowed_connect_ports);
> >       ps->workers = apr_array_append(p, base->workers, overrides-
> >workers);
> >       ps->balancers = apr_array_append(p, base->balancers, overrides-
> >balancers);
> > +    /* The balancers inherited from base don't have their members
> reserved on
> > +     * the scoreboard_lb for this server, account for them now.
> > +     */
> > +    for (i = 0; i < base->balancers->nelts; ++i) {
> > +        proxy_balancer *balancer = (proxy_balancer *)base->balancers-
> >elts + i;
> > +        proxy_lb_workers += balancer->workers->nelts;
> > +    }
> >       ps->forward = overrides->forward ? overrides->forward : base-
> >forward;
> >       ps->reverse = overrides->reverse ? overrides->reverse : base-
> >reverse;
> >
> > --
> >
> > Please note that since all the workers would really be accounted in
> > the scoreboard, configurations like the one of PR 58267 (with
> > inherited balancers) would also need bigger SHMs (but no more) than
> > currently...
> >
> > WDYT?
> >
> 
> Regards,
> Jan Kaluza
> 


Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Jan Kaluža <jk...@redhat.com>.
On 08/24/2015 11:12 PM, Yann Ylavic wrote:
> On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com> wrote:
>>
>> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com> wrote:
>>>
>>> 2) Increment proxy_lb_workers according to number of workers in balancer
>>> when using "ProxyPass /foobar/ Balancer://foobar/" in the VirtualHost. The
>>> scoreboard would have right size and ap_proxy_set_scoreboard_lb would not
>>> fail then.
>>
>> I think we can do this quite easily in merge_proxy_config(), by
>> incrementing proxy_lb_workers for each base->balancers->workers. I did
>> not test it yet though.
>
> I tested the below which seems to work.

Hm, this reserves slots in the scoreboard even when the balancers are 
not used in the virtualhost, or am I wrong?

I originally thought about increasing proxy_lb_workers only when the 
balancers are actually used in a ProxyPass* in the vhost.

> Index: modules/proxy/mod_proxy.c
> ===================================================================
> --- modules/proxy/mod_proxy.c    (revision 1697358)
> +++ modules/proxy/mod_proxy.c    (working copy)
> @@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s
>
>   static void * merge_proxy_config(apr_pool_t *p, void *basev, void *overridesv)
>   {
> +    int i;
>       proxy_server_conf *ps = apr_pcalloc(p, sizeof(proxy_server_conf));
>       proxy_server_conf *base = (proxy_server_conf *) basev;
>       proxy_server_conf *overrides = (proxy_server_conf *) overridesv;
> @@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p, vo
>       ps->allowed_connect_ports = apr_array_append(p,
> base->allowed_connect_ports, overrides->allowed_connect_ports);
>       ps->workers = apr_array_append(p, base->workers, overrides->workers);
>       ps->balancers = apr_array_append(p, base->balancers, overrides->balancers);
> +    /* The balancers inherited from base don't have their members reserved on
> +     * the scoreboard_lb for this server, account for them now.
> +     */
> +    for (i = 0; i < base->balancers->nelts; ++i) {
> +        proxy_balancer *balancer = (proxy_balancer *)base->balancers->elts + i;
> +        proxy_lb_workers += balancer->workers->nelts;
> +    }
>       ps->forward = overrides->forward ? overrides->forward : base->forward;
>       ps->reverse = overrides->reverse ? overrides->reverse : base->reverse;
>
> --
>
> Please note that since all the workers would really be accounted in
> the scoreboard, configurations like the one of PR 58267 (with
> inherited balancers) would also need bigger SHMs (but no more) than
> currently...
>
> WDYT?
>

Regards,
Jan Kaluza



Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Yann Ylavic <yl...@gmail.com>.
On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic <yl...@gmail.com> wrote:
>
> On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com> wrote:
>>
>> 2) Increment proxy_lb_workers according to number of workers in balancer
>> when using "ProxyPass /foobar/ Balancer://foobar/" in the VirtualHost. The
>> scoreboard would have right size and ap_proxy_set_scoreboard_lb would not
>> fail then.
>
> I think we can do this quite easily in merge_proxy_config(), by
> incrementing proxy_lb_workers for each base->balancers->workers. I did
> not test it yet though.

I tested the below which seems to work.

Index: modules/proxy/mod_proxy.c
===================================================================
--- modules/proxy/mod_proxy.c    (revision 1697358)
+++ modules/proxy/mod_proxy.c    (working copy)
@@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s

 static void * merge_proxy_config(apr_pool_t *p, void *basev, void *overridesv)
 {
+    int i;
     proxy_server_conf *ps = apr_pcalloc(p, sizeof(proxy_server_conf));
     proxy_server_conf *base = (proxy_server_conf *) basev;
     proxy_server_conf *overrides = (proxy_server_conf *) overridesv;
@@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p, vo
     ps->allowed_connect_ports = apr_array_append(p,
base->allowed_connect_ports, overrides->allowed_connect_ports);
     ps->workers = apr_array_append(p, base->workers, overrides->workers);
     ps->balancers = apr_array_append(p, base->balancers, overrides->balancers);
+    /* The balancers inherited from base don't have their members reserved on
+     * the scoreboard_lb for this server, account for them now.
+     */
+    for (i = 0; i < base->balancers->nelts; ++i) {
+        proxy_balancer *balancer = (proxy_balancer *)base->balancers->elts + i;
+        proxy_lb_workers += balancer->workers->nelts;
+    }
     ps->forward = overrides->forward ? overrides->forward : base->forward;
     ps->reverse = overrides->reverse ? overrides->reverse : base->reverse;

--

Please note that since all the workers would now really be accounted
for in the scoreboard, configurations like the one in PR 58267 (with
inherited balancers) would also need bigger SHMs than currently (but
no bigger than necessary)...

WDYT?

Re: PR 58267: Regression in 2.2.31 caused by r1680920

Posted by Yann Ylavic <yl...@gmail.com>.
Hi Jan,

I was working on the same issue... and was going to implement 2) :)

On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža <jk...@redhat.com> wrote:
>
> Now, the root of the error is that the scoreboard size is static (set to
> proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT), but it is not incremented
> when ProxyPass with balancer is used in the virtualhost. This leads to lack
> of space in scoreboard when Balancers are used in multiple virtualhosts.

I came to the same conclusion.

>
> I think there are two possible fixes:
>
> 1) Do not use server->server_hostname when computing hash which is used to
> determine right scoreboard field. I think this would fix this bug, but I'm
> not sure what would happen in situations when you define 2 balancers with
> the same name in different virtualhosts...

They should be different balancers...

>
> On the other-side, when there is global Proxy balancer, it make sense to use
> the same worker->s for all the ProxyPass in virtualhosts.

This would break compatibility.

>
> 2) Increment proxy_lb_workers according to number of workers in balancer
> when using "ProxyPass /foobar/ Balancer://foobar/" in the VirtualHost. The
> scoreboard would have right size and ap_proxy_set_scoreboard_lb would not
> fail then.

I think we can do this quite easily in merge_proxy_config(), by
incrementing proxy_lb_workers for each worker in base->balancers. I
have not tested it yet, though.

Regards,
Yann.