Posted to dev@httpd.apache.org by Stefan Eissing <st...@greenbytes.de> on 2018/05/08 14:50:13 UTC

slotmem + balancer

r1831192 on trunk. Every time I stop/start my test server, I get a new set of slotmem-shm-p*.shm files and the log says 10 times:
...
[Tue May 08 14:43:12.728333 2018] [proxy_balancer:emerg] [pid 49764:tid 140736151831424] AH01205: slotmem_attach failed

There are 10 sets of files. I have 5 balancers defined and initially, 2 httpd processes run. Coincidence?

-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p120bed0a_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p2f1c2700_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p39dac199_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p42fd765c_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p689c1ea2_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p72dffeb5_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_h2x_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-p84c8ab74_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb14e9343_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pb2a77e76_nghttp2.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78_h2_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78_h2c_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78_http_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78_https_local.shm
-rw-r--r--  1 sei  staff      8  8 Mai 16:43 slotmem-shm-pd1f3ef78_nghttp2.shm


Re: slotmem + balancer

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On May 12, 2018, at 10:23 AM, Mark Blackman <ma...@exonetric.com> wrote:
> 
> I think you will find it difficult to rework this effectively unless you can identify representative test cases, possibly including a segfault.
> 
> For me the most important characteristics of the fix were (a) to more accurately identify genuine virtual host changes (rather than simple line number shifts) that might invalidate balancer state, and (b) at least in some cases, to pick up existing SHMs left over from the last httpd start. 
> 

Thx... I still think that all the recent changes to the module have made it more fragile than it was before... But whatever 


Re: slotmem + balancer

Posted by Mark Blackman <ma...@exonetric.com>.
> On 8 May 2018, at 18:19, Jim Jagielski <ji...@jaguNET.com> wrote:
> 
> I am under the impression that we should likely restore mod_slotmem_shm
> back to its "orig" condition, either:
> 
>    o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341 <http://svn.apache.org/viewvc?view=revision&revision=1822341>
>    o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1782069 <http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1782069>
> 
> and try to rework all of this. I fear that all the subsequent work has really made this module extremely fragile. We need, IMO, a very minimal fix for PR 62044
> 
> Just for the heck of it, didn't r1822341 <http://svn.apache.org/viewvc?view=revision&revision=1822341> actually *FIX* the PR? 


Hi,


To follow up as the reporter of PR 62044: the original problem was a segmentation fault caused by an entirely different 3rd-party vendor module, which Apache 2.4.32 magically fixed (no idea how). However, the segfaults meant that the SHMs and/or SHM placeholder files weren't getting cleaned up correctly on restarts (in 2.4.29). I think it is important for httpd to handle the segfault case well, because 3rd-party modules can cause problems that httpd can't anticipate. Ultimately, httpd creates a bunch of persistent external state that it should make an effort to deal with cleanly when it stops unexpectedly and is subsequently restarted.

We restart/reload Apache frequently enough that preserving balancer state is useful but not critical.

I think you will find it difficult to rework this effectively unless you can identify representative test cases, possibly including a segfault.

For me the most important characteristics of the fix were (a) to more accurately identify genuine virtual host changes (rather than simple line number shifts) that might invalidate balancer state, and (b) at least in some cases, to pick up existing SHMs left over from the last httpd start. 

- Mark

Re: slotmem + balancer

Posted by Jim Jagielski <ji...@jaguNET.com>.

> On May 20, 2018, at 4:59 PM, Yann Ylavic <yl...@gmail.com> wrote:
> 
> 
> Now we are back to 2.4.29 code, r1822341 is in again, and I committed
> additional changes (minimal hopefully) to address the issues reported
> in PR 62308 (and PR 62044 still). My own testing, based on the tests
> run by the OP (his on Windows, mine on Linux), all passes right now.
> So I'm waiting for the OP's latest results to propose a backport.
> 
> WDYT of this approach (and patches), does it sound better?
> 

It does and I appreciate you taking the time and effort not only on the code but also with this most excellent email!

Sorry I was such a "stickler" about all this... most of it is due to my resistance to large-scale code changes that try to address separate issues in one large chunk. Past history w/i the project has shown that, in general, these cause more harm than good because too much is changed in one go, making it difficult to realistically assess the impact. So when I see these big refactors, I tend to recall the risk and have a semi-immediate knee-jerk reaction to them.


Re: slotmem + balancer

Posted by Daniel Ruggeri <dr...@primary.net>.
Thanks, Yann;
   This does help explain the rationale and I appreciate you taking the
time to walk us through the reasoning.

-- 
Daniel Ruggeri

On 5/20/2018 3:59 PM, Yann Ylavic wrote:
> On Tue, May 8, 2018 at 7:19 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>> I am under the impression that we should likely restore mod_slotmem_shm
>> back to its "orig" condition,
> So I did this (r1831868),
>
>> either:
>>
>>    o
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341
> and this too (r1831869).
>
>> Just for the heck of it, didn't r1822341 actually *FIX* the PR?
> Not enough to pass all the tests raised by both PRs 62044 and 62308,
> hence follow ups r1831870, r1831871+r1831935 and finally
> r1831938.
>
> I think I owe an explanation as to why I made those changes in
> mod_slotmem_shm for 2.4.33, and why until now it looked to me like the
> right thing to do.
>
> Initially I thought that, on Unix MPMs, mod_slotmem_shm was
> reusing/preserving SHMs on restart (some parts of the code were quite
> misleading in this regard), and thus that switching to per-generation SHMs
> (like on Windows) was potentially going to break Unix users.
> Actually we have never reused SHMs on restart on any system unless
> BalancerPersist is enabled, while for me BalancerPersist was meant to
> preserve data on stop/start only.
> In any case this led me to take the "SHMs maintained in pglobal"
> approach for all OSes (creating new ones only when sizes change), as I
> thought it would allow preserving SHMs on Windows too (IOW, it was meant
> to fix Windows rather than break Unix).
> This has been my reasoning from the start: I only cared about fixing
> the reported bugs and preserving the existing behaviour (supposedly),
> not an irrepressible desire on my part to change/refactor the code
> (as you have suggested several times).
>
> Anyway, this was before I started to work on the last issue reported
> in PR 62308 (change some BalancerMember name/port and httpd won't
> restart), where I realized that sharing SHMs between old and new
> generations can't work in all cases (at least not without further,
> non-trivial changes), even when the sizes don't change. So I wondered
> whether this particular case worked in 2.4.29, and found that SHMs were
> re-created each time, so it worked... until BalancerPersist was enabled
> (same failure).
>
> So you were right about the potential to break things with my changes
> in the code, though it already didn't work as expected.
> Btw, I would have preferred more constructive feedback, including in
> the discussions about PR 62044 and the original r1822341 commit thread
> (where I explained why I was going to revert it...).
>
> Now we are back to 2.4.29 code, r1822341 is in again, and I committed
> additional changes (minimal hopefully) to address the issues reported
> in PR 62308 (and PR 62044 still). My own testing, based on the tests
> run by the OP (his on Windows, mine on Linux), all passes right now.
> So I'm waiting for the OP's latest results to propose a backport.
>
> WDYT of this approach (and patches), does it sound better?
>
> Regards,
> Yann.


Re: slotmem + balancer

Posted by Stefan Eissing <st...@greenbytes.de>.

> On 22.05.2018 at 10:16, Yann Ylavic <yl...@gmail.com> wrote:
> 
> On Tue, May 22, 2018 at 10:06 AM, Stefan Eissing
> <st...@greenbytes.de> wrote:
>> 
>> Could you, just as a rough description, list which
>> test cases would have prevented the bugs? Maybe someone
>> would feel like implementing them (or in case of a future
>> code change there, could at least manually find some
>> instructions on what to test in the mailing list archive).
>> 
>> E.g.
>> - configure slotmem as 1) XYZ, 2) ABC with persistence, 3) DEF...
>> - start, request something, expect bla1
>> - stop+start, request another thing, expect bla2
>> - graceful, request, expect bla3
>> 
>> Just while it is fresh in your mind...
> 
> Looks like we had the same kind of idea :) Just asked the OP (PR
> 62308) to provide his/her tests to see if we can integrate them in our
> test suite.
> I'm not sure it can be done easily though (how to add/del balancers
> and members between restarts in our perl framework?), so I agree that
> in the meantime at least a description is important; I will try to cook
> something.

Yeah, not sure that the effort to put that into the perl magic
would be worth it. 

In my mod_md pytest suite, I exchange, between restarts, a test.conf
file that is always included, to simulate changes in the MDomain
settings. It is not hard to do it that way, and with a small server
config it is actually quite fast to run.
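
A rough sketch of what such a conf-swapping helper could look like in a
pytest-style suite (the apachectl location, paths and variant file names
below are only placeholders, not taken from an actual suite):

    # Sketch: swap an always-included test.conf between restarts to
    # simulate config changes; all paths here are hypothetical.
    import shutil
    import subprocess

    APACHECTL = "/usr/local/apache2/bin/apachectl"   # placeholder prefix
    TEST_CONF = "/usr/local/apache2/conf/test.conf"  # included by httpd.conf

    def use_variant(variant_path):
        """Copy one of several prepared config variants into place."""
        shutil.copyfile(variant_path, TEST_CONF)

    def stop_start():
        subprocess.run([APACHECTL, "-k", "stop"], check=True)
        subprocess.run([APACHECTL, "-k", "start"], check=True)

    # e.g.: use_variant("variants/balancer-two-members.conf"); stop_start()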

Cheers,

Stefan


Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, May 22, 2018 at 10:06 AM, Stefan Eissing
<st...@greenbytes.de> wrote:
>
> Could you, just as a rough description, list which
> test cases would have prevented the bugs? Maybe someone
> would feel like implementing them (or in case of a future
> code change there, could at least manually find some
> instructions on what to test in the mailing list archive).
>
> E.g.
> - configure slotmem as 1) XYZ, 2) ABC with persistence, 3) DEF...
> - start, request something, expect bla1
> - stop+start, request another thing, expect bla2
> - graceful, request, expect bla3
>
> Just while it is fresh in your mind...

Looks like we had the same kind of idea :) Just asked the OP (PR
62308) to provide his/her tests to see if we can integrate them in our
test suite.
I'm not sure it can be done easily though (how to add/del balancers
and members between restarts in our perl framework?), so I agree that
in the meantime at least a description is important; I will try to cook
something.

Re: slotmem + balancer

Posted by Stefan Eissing <st...@greenbytes.de>.
Yann, thanks for your perseverance on this.

Could you, just as a rough description, list which
test cases would have prevented the bugs? Maybe someone
would feel like implementing them (or in case of a future
code change there, could at least manually find some
instructions on what to test in the mailing list archive).

E.g.
- configure slotmem as 1) XYZ, 2) ABC with persistence, 3) DEF...
- start, request something, expect bla1
- stop+start, request another thing, expect bla2
- graceful, request, expect bla3

Just while it is fresh in your mind... 
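
Very roughly, the outline above could end up as a pytest-style test along
these lines (the server-control helper, URL and expected status below are
placeholders, not an actual test from our suite):

    # Sketch of the outline above; apachectl path and balancer URL are
    # hypothetical placeholders.
    import subprocess
    import time

    import requests

    APACHECTL = "/usr/local/apache2/bin/apachectl"  # placeholder
    URL = "http://localhost:8080/balancer-app/"     # placeholder balancer path

    def ctl(action):
        subprocess.run([APACHECTL, "-k", action], check=True)
        time.sleep(1)  # crude wait for the server to settle

    def test_balancer_across_restarts():
        ctl("start")
        assert requests.get(URL).status_code == 200  # "expect bla1"

        ctl("stop")
        ctl("start")
        assert requests.get(URL).status_code == 200  # "expect bla2"

        ctl("graceful")
        assert requests.get(URL).status_code == 200  # "expect bla3"

        ctl("stop")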

Cheers,

Stefan

> On 20.05.2018 at 22:59, Yann Ylavic <yl...@gmail.com> wrote:
> 
> On Tue, May 8, 2018 at 7:19 PM, Jim Jagielski <ji...@jagunet.com> wrote:
>> I am under the impression that we should likely restore mod_slotmem_shm
>> back to its "orig" condition,
> 
> So I did this (r1831868),
> 
>> either:
>> 
>>   o
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341
> 
> and this too (r1831869).
> 
>> 
>> Just for the heck of it, didn't r1822341 actually *FIX* the PR?
> 
> Not enough to pass all the tests raised by both PRs 62044 and 62308,
> hence follow ups r1831870, r1831871+r1831935 and finally
> r1831938.
> 
> I think I owe an explanation as to why I made those changes in
> mod_slotmem_shm for 2.4.33, and why until now it looked to me like the
> right thing to do.
> 
> Initially I thought that, on Unix MPMs, mod_slotmem_shm was
> reusing/preserving SHMs on restart (some parts of the code were quite
> misleading in this regard), and thus that switching to per-generation SHMs
> (like on Windows) was potentially going to break Unix users.
> Actually we have never reused SHMs on restart on any system unless
> BalancerPersist is enabled, while for me BalancerPersist was meant to
> preserve data on stop/start only.
> In any case this led me to take the "SHMs maintained in pglobal"
> approach for all OSes (creating new ones only when sizes change), as I
> thought it would allow preserving SHMs on Windows too (IOW, it was meant
> to fix Windows rather than break Unix).
> This has been my reasoning from the start: I only cared about fixing
> the reported bugs and preserving the existing behaviour (supposedly),
> not an irrepressible desire on my part to change/refactor the code
> (as you have suggested several times).
> 
> Anyway, this was before I started to work on the last issue reported
> in PR 62308 (change some BalancerMember name/port and httpd won't
> restart), where I realized that sharing SHMs between old and new
> generations can't work in all cases (at least not without further,
> non-trivial changes), even when the sizes don't change. So I wondered
> whether this particular case worked in 2.4.29, and found that SHMs were
> re-created each time, so it worked... until BalancerPersist was enabled
> (same failure).
> 
> So you were right about the potential to break things with my changes
> in the code, though it already didn't work as expected.
> Btw, I would have preferred more constructive feedback, including in
> the discussions about PR 62044 and the original r1822341 commit thread
> (where I explained why I was going to revert it...).
> 
> Now we are back to 2.4.29 code, r1822341 is in again, and I committed
> additional changes (minimal hopefully) to address the issues reported
> in PR 62308 (and PR 62044 still). My own testing, based on the tests
> run by the OP (his on Windows, mine on Linux), all passes right now.
> So I'm waiting for the OP's latest results to propose a backport.
> 
> WDYT of this approach (and patches), does it sound better?
> 
> Regards,
> Yann.


Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
On Tue, May 8, 2018 at 7:19 PM, Jim Jagielski <ji...@jagunet.com> wrote:
> I am under the impression that we should likely restore mod_slotmem_shm
> back to its "orig" condition,

So I did this (r1831868),

> either:
>
>    o
> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341

and this too (r1831869).

>
> Just for the heck of it, didn't r1822341 actually *FIX* the PR?

Not enough to pass all the tests raised by both PRs 62044 and 62308,
hence follow ups r1831870, r1831871+r1831935 and finally
r1831938.

I think I owe an explanation as to why I made those changes in
mod_slotmem_shm for 2.4.33, and why until now it looked to me like the
right thing to do.

Initially I thought that, on Unix MPMs, mod_slotmem_shm was
reusing/preserving SHMs on restart (some parts of the code were quite
misleading in this regard), and thus that switching to per-generation SHMs
(like on Windows) was potentially going to break Unix users.
Actually we have never reused SHMs on restart on any system unless
BalancerPersist is enabled, while for me BalancerPersist was meant to
preserve data on stop/start only.
In any case this led me to take the "SHMs maintained in pglobal"
approach for all OSes (creating new ones only when sizes change), as I
thought it would allow preserving SHMs on Windows too (IOW, it was meant
to fix Windows rather than break Unix).
This has been my reasoning from the start: I only cared about fixing
the reported bugs and preserving the existing behaviour (supposedly),
not an irrepressible desire on my part to change/refactor the code
(as you have suggested several times).

Anyway, this was before I started to work on the last issue reported
in PR 62308 (change some BalancerMember name/port and httpd won't
restart), where I realized that sharing SHMs between old and new
generations can't work in all cases (at least not without further,
non-trivial changes), even when the sizes don't change. So I wondered
whether this particular case worked in 2.4.29, and found that SHMs were
re-created each time, so it worked... until BalancerPersist was enabled
(same failure).

So you were right about the potential to break things with my changes
in the code, though it already didn't work as expected.
Btw, I would have preferred more constructive feedback, including in
the discussions about PR 62044 and the original r1822341 commit thread
(where I explained why I was going to revert it...).

Now we are back to 2.4.29 code, r1822341 is in again, and I committed
additional changes (minimal hopefully) to address the issues reported
in PR 62308 (and PR 62044 still). My own testing, based on the tests
run by the OP (his on Windows, mine on Linux), all passes right now.
So I'm waiting for the OP's latest results to propose a backport.

WDYT of this approach (and patches), does it sound better?

Regards,
Yann.

Re: slotmem + balancer

Posted by Jim Jagielski <ji...@jaguNET.com>.
I am under the impression that we should likely restore mod_slotmem_shm
back to its "orig" condition, either:

   o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341 <http://svn.apache.org/viewvc?view=revision&revision=1822341>
   o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1782069 <http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1782069>

and try to rework all of this. I fear that all the subsequent work has really made this module extremely fragile. We need, IMO, a very minimal fix for PR 62044

Just for the heck of it, didn't r1822341 <http://svn.apache.org/viewvc?view=revision&revision=1822341> actually *FIX* the PR? 

> On May 8, 2018, at 10:50 AM, Stefan Eissing <st...@greenbytes.de> wrote:
> 
> r1831192 on trunk. Every time I stop/start my test server, I get a new set of slotmem-shm-p*.shm files and the log says 10 times: 
> ...
> [Tue May 08 14:43:12.728333 2018] [proxy_balancer:emerg] [pid 49764:tid 140736151831424] AH01205: slotmem_attach failed
> 
> There are 10 sets of files. I have 5 balancers defined and initially, 2 httpd processes run. Coincidence?
> 
> 


Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
r1831218 was reverted (for the reasons explained in r1831396).
r1831394 is the right/compatible fix, I think; does it still work for
you with latest trunk (r1831396+)?

On Wed, May 9, 2018 at 9:11 AM, Stefan Eissing
<st...@greenbytes.de> wrote:
> I can confirm. This solves the problem in my setup.
>
>> On 09.05.2018 at 03:25, Yann Ylavic <yl...@gmail.com> wrote:
>>
>> On Wed, May 9, 2018 at 1:25 AM, Yann Ylavic <yl...@gmail.com> wrote:
>>> I can reproduce with global balancers (10 is your number of vhosts
>>> presumably, hence with global balancers there are as many sets of
>>> files).
>>> Let me look at what's happening for the failure...
>>
>> Should be fixed in r1831218.
>

Re: slotmem + balancer

Posted by Stefan Eissing <st...@greenbytes.de>.
I can confirm. This solves the problem in my setup.

> On 09.05.2018 at 03:25, Yann Ylavic <yl...@gmail.com> wrote:
> 
> On Wed, May 9, 2018 at 1:25 AM, Yann Ylavic <yl...@gmail.com> wrote:
>> I can reproduce with global balancers (10 is your number of vhosts
>> presumably, hence with global balancers there are as many sets of
>> files).
>> Let me look at what's happening for the failure...
> 
> Should be fixed in r1831218.


Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
On Wed, May 9, 2018 at 1:25 AM, Yann Ylavic <yl...@gmail.com> wrote:
> I can reproduce with global balancers (10 is your number of vhosts
> presumably, hence with global balancers there are as many sets of
> files).
> Let me look at what's happening for the failure...

Should be fixed in r1831218.

Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
I can reproduce with global balancers (10 is your number of vhosts
presumably, hence with global balancers there are as many sets of
files).
Let me look at what's happening for the failure...

On Wed, May 9, 2018 at 12:45 AM, Yann Ylavic <yl...@gmail.com> wrote:
> Hi Stefan,
>
> what system is this, and which SHM mechanism (i.e. which APR_USE_SHMEM_*
> is defined in your "include/apr.h")?
> The child processes fail to init (attach SHMs), although the SHMs
> should be inherited on Unix (found in the global list); could you please
> provide [debug] logs?
>
> Thanks,
> Yann.
>
>
> On Tue, May 8, 2018 at 5:00 PM, Stefan Eissing
> <st...@greenbytes.de> wrote:
>> Correction, the log seems to be filling with these entries every 1-2 seconds. The server does not progress further and does not answer requests. Any idea?
>>
>>> On 08.05.2018 at 16:50, Stefan Eissing <st...@greenbytes.de> wrote:
>>>
>>> r1831192 on trunk. Every time I stop/start my test server, I get a new set of slotmem-shm-p*.shm files and the log says 10 times:
>>> ...
>>> [Tue May 08 14:43:12.728333 2018] [proxy_balancer:emerg] [pid 49764:tid 140736151831424] AH01205: slotmem_attach failed
>>>
>>> There are 10 sets of files. I have 5 balancers defined and initially, 2 httpd processes run. Coincidence?
>>>
>>>
>>

Re: slotmem + balancer

Posted by Yann Ylavic <yl...@gmail.com>.
Hi Stefan,

what system is this, and which SHM mechanism (i.e. which APR_USE_SHMEM_*
is defined in your "include/apr.h")?
The child processes fail to init (attach SHMs), although the SHMs
should be inherited on Unix (found in the global list); could you please
provide [debug] logs?
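
For reference, a minimal sketch of one way to check which mechanism an APR
build selected is to scan the generated apr.h for the APR_USE_SHMEM_*
defines (the header path below is only a placeholder):

    # Sketch: print the APR_USE_SHMEM_* defines set to 1 in the generated
    # apr.h; the path is a placeholder for your build/install location.
    import re

    APR_H = "/usr/local/apr/include/apr-1/apr.h"  # placeholder

    with open(APR_H) as f:
        for line in f:
            m = re.match(r"\s*#define\s+(APR_USE_SHMEM_\w+)\s+(\d+)", line)
            if m and m.group(2) == "1":
                print("enabled:", m.group(1))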

Thanks,
Yann.


On Tue, May 8, 2018 at 5:00 PM, Stefan Eissing
<st...@greenbytes.de> wrote:
> Correction, the log seems to be filling with these entries every 1-2 seconds. The server does not progress further and does not answer requests. Any idea?
>
>> On 08.05.2018 at 16:50, Stefan Eissing <st...@greenbytes.de> wrote:
>>
>> r1831192 on trunk. Every time I stop/start my test server, I get a new set of slotmem-shm-p*.shm files and the log says 10 times:
>> ...
>> [Tue May 08 14:43:12.728333 2018] [proxy_balancer:emerg] [pid 49764:tid 140736151831424] AH01205: slotmem_attach failed
>>
>> There are 10 sets of files. I have 5 balancers defined and initially, 2 httpd processes run. Coincidence?
>>
>>
>

Re: slotmem + balancer

Posted by Stefan Eissing <st...@greenbytes.de>.
Correction, the log seems to be filling with these entries every 1-2 seconds. The server does not progress further and does not answer requests. Any idea?

> On 08.05.2018 at 16:50, Stefan Eissing <st...@greenbytes.de> wrote:
> 
> r1831192 on trunk. Every time I stop/start my test server, I get a new set of slotmem-shm-p*.shm files and the log says 10 times: 
> ...
> [Tue May 08 14:43:12.728333 2018] [proxy_balancer:emerg] [pid 49764:tid 140736151831424] AH01205: slotmem_attach failed
> 
> There are 10 sets of files. I have 5 balancers defined and initially, 2 httpd processes run. Coincidence?
> 
>