Posted to users@activemq.apache.org by Dan Langford <da...@gmail.com> on 2017/09/21 18:43:41 UTC

[Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Quick Summary: If I make any change at all to the slave broker.xml file, the
"configuration reload" feature takes effect and starts/enables the
acceptors on the Slave. This causes the slave to stop backing up the master
and start accepting its own connections. Also, address and security settings
that have been made via the management API are lost and only the broker.xml
file is considered. I'm wondering if this is intended behavior, a config
setting I need to change, or a possible bug. Specific details and examples
follow. Also, I erroneously created an issue for this already that, based
on our findings, may need to be closed: ARTEMIS-1429.

======

NODE CONFIG

I am running in a simple Master/Slave cluster. Each node is configured
such that the cluster is defined with a static connector to the other.
Startup looks fine: the Slave stops accepting connections and the backup is
announced.
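
For reference, the slave side's HA and cluster sections look roughly like
this (host names, connector names, and ports here are placeholders rather
than my exact files; for failback the master side would carry
<check-for-live-server>true</check-for-live-server> under
<replication><master>):

    <connectors>
       <!-- connector this broker advertises about itself -->
       <connector name="slave">tcp://slave-host:61616</connector>
       <!-- the other node in the static pair -->
       <connector name="master">tcp://master-host:61616</connector>
    </connectors>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>slave</connector-ref>
          <static-connectors>
             <connector-ref>master</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>

    <ha-policy>
       <replication>
          <slave>
             <allow-failback>true</allow-failback>
          </slave>
       </replication>
    </ha-policy>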

QUEUE CONFIG

Let's set up a scenario that demonstrates a few things. Let's say that
in broker.xml an address named FOO (anycast to a queue named FOO) is
defined, and security settings allow role MAVERICK to send and consume.
Let's also say that after the system started we created, via management
operations, another address named BAR (anycast to a queue named BAR). We
also added security settings at runtime to allow role GOOSE to send and
consume on both FOO and BAR. To summarize (a rough XML sketch follows the
summary below):

*broker.xml*
address FOO
role MAVERICK send to FOO

*runtime management*
address BAR
role GOOSE send to BAR
role GOOSE send to FOO
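
In broker.xml terms, the statically defined part is roughly the following
(the match string and the exact permission types are illustrative, not
copied from my file):

    <addresses>
       <address name="FOO">
          <anycast>
             <queue name="FOO"/>
          </anycast>
       </address>
    </addresses>

    <security-settings>
       <security-setting match="FOO">
          <permission type="send" roles="MAVERICK"/>
          <permission type="consume" roles="MAVERICK"/>
       </security-setting>
    </security-settings>

BAR and the GOOSE permissions exist only at runtime (added via the
management API), not in this file.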

FAILOVER & FAILBACK WORKING

So Master is "serving", if you will, FOO and BAR, and GOOSE can send to both
FOO and BAR. If we turn off Master, then Slave starts listening on the
acceptors and continues to serve FOO and BAR. The security settings were
also replicated, so GOOSE can still send to FOO and BAR; replication is
working fine. Start Master back up and Master takes over, and the Slave
turns off its acceptors. This is just as expected, and it works great behind
our F5/VIP, which determines active pool members based on who is accepting
requests on 5672.

PROBLEMS WITH CONFIGURATION RELOAD & BACKUPS

If I make any change at all to the slave broker.xml file, the "configuration
reload" feature takes effect and starts/enables the acceptors on the Slave.
The Slave then only "serves" the queues that are defined in broker.xml,
so in this case it is only serving FOO. Since our VIP now sees that another
pool member is active, it starts routing traffic to the slave. The slave can
only take FOO traffic because we have auto-create of queues turned off, so
BAR traffic that happens to go to the slave is denied. Replication also now
seems problematic, as the Slave is no longer backing up the Master and the
messages now being sent to FOO on the Slave are not being backed up by
anybody.
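
(For context, auto-create is disabled in our address-settings with something
along these lines; the match value is illustrative:)

    <address-settings>
       <address-setting match="#">
          <auto-create-queues>false</auto-create-queues>
          <auto-create-addresses>false</auto-create-addresses>
       </address-setting>
    </address-settings>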

In fact, anything configured via management is no longer considered: GOOSE
can no longer send to FOO, while MAVERICK still can.

QUESTIONS

Is this by design? Is there a way to completely disable configuration
reload altogether? Can configuration reload be configured to also take
into account address and security configuration that has happened via the
management API? Is there a way to configure the configuration reload to
consider the fact that it is supposed to be part of a cluster?

I am completely open to this being a problem with my setup. I wanted to
quickly throw this out there; if I need to come back and supply broker XML
files, I can create some that use these examples. But maybe this is
something that has been brought up before.

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Clebert Suconic <cl...@gmail.com>.
You edited the broker.xml on the slave and it became live?

If you edited the live one, the only thing I can think of is a bug, fixed in
2.3, where queues would be removed if the configuration was edited. But I
don't see that being the case here.

--
Clebert Suconic

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Dan Langford <da...@gmail.com>.
I am just getting back from back-to-back tech conferences. Thanks for the
follow-up.

I attached a broker0.xml (master, :61616, :5672, :8161) and broker1.xml
(slave, :61617, :5673, :8162). When I start broker0 and broker1, I see that
broker1 announces itself as the backup. Also, both brokers show Artemis
version 2.3.0 in the logs. I can log into the included console at
http://localhost:8162/console and see that broker1 is part of the cluster
and has no queues deployed, which is correct because it is just in
slave/backup mode right now.
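
broker1's acceptors are essentially the generated defaults moved to the
alternate ports, something like this (URI parameters trimmed; the 8162
console port is configured in bootstrap.xml rather than broker.xml):

    <acceptors>
       <!-- main acceptor -->
       <acceptor name="artemis">tcp://0.0.0.0:61617</acceptor>
       <!-- AMQP acceptor -->
       <acceptor name="amqp">tcp://0.0.0.0:5673?protocols=AMQP</acceptor>
    </acceptors>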

When I edit broker1.xml while these are running and save the file, I see
these logs printed by the slave node (broker1):

22:01:22,727 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...security
22:01:22,727 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...address settings
22:01:22,728 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...diverts
22:01:22,728 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...addresses
22:01:22,737 INFO  [org.apache.activemq.artemis.core.server] AMQ221003:
Deploying queue FOO
22:01:22,752 WARN  [org.apache.activemq.artemis.core.server] AMQ222165: No
Dead Letter Address configured for queue FOO in AddressSettings
22:01:22,752 WARN  [org.apache.activemq.artemis.core.server] AMQ222166: No
Expiry Address configured for queue FOO in AddressSettings

(The master node, broker0, printed no logs.) It is at this point that
broker1, which was the slave, deploys a queue and takes into account only
the security settings from its broker1.xml file. The slave should not have
any queues deployed, yet if I look at the console I can see the FOO queue
deployed.

If/when it does decide to deploy queues due to a failure of the master node,
it should use the security settings currently in use in the cluster, which
could be a combination of things defined in the broker file and other
settings changed via the API. All of that functionality does work just fine
in a failover scenario, but things get into a weird state if the broker.xml
files are changed.

If I rerun this scenario with broker0.xml being the one edited, it also
gets into an odd state.

I know that this does not demonstrate all of the initial claims I made; I
will get back to working on that. However, it does show that a slave that
was an announced backup deploys a queue and reloads configuration in a
scenario where it probably should not.

If the attached files do not make it through, let me know and I can upload
them somewhere.


Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Justin Bertram <jb...@apache.org>.
Any progress here?


Justin


Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Dan Langford <da...@gmail.com>.
A quick note, and then I will work on providing a more reproducible set
of artifacts.

> How did you test that?

Three things I notice. 1) In the JMX console (viewed via the Hawtio/Jolokia
API in 2.1, and the skinned version of it in the new 2.3 console [very
nice, BTW]), the slave will typically NOT show the addresses if the slave is
only set up to be a backup of the master; it will also not show the
acceptors. 2) "netstat -tunlp" on the slave's box shows that the slave is
not listening for connections on the ports. When I change the file and save
it, I see 3) the slave broker starts logging that it is starting acceptors,
it logs the addresses/queues that are coming online, and it mentions
something about SSL with regard to the AMQPS port. I go back and check 1)
the JMX console, and sure enough it is now showing addresses and acceptors,
but only the addresses mentioned in broker.xml, none of the ones added since
then. Then over to 2) the command line "netstat -tunlp", and the slave is
now listening on 5671. Another side effect I see is that 4) authentication
may not work if the role was added programmatically.

A restart of the Slave resolves all of this, and it comes back online as
simply a backup to the Master.

>  If reloading broker.xml causes queues added via the management API to
>  disappear I think that's likely a bug.

That is what I observed, but I would clarify that it is only on that Slave
node. The Master node is still working just fine with all the queues added
via the management API, and when the slave restarts it goes back to working
as expected; failover/failback with all those additional queues works. So it
is not a permanent delete in the cluster, it is just not accessible on that
slave node after the configuration reload.

I have not modified the delete policy.

I will whip up the simplest set of broker.xml files to show this as soon as
I can here at work.

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Michael André Pearce <mi...@me.com>.
The only scenario I can think of here for the loss of addresses/queues,
noting that somehow your slave thinks it can activate as master (i.e. the
acceptors start up), is that auto-delete-queues/auto-delete-addresses is
kicking in (the default is true, I believe): it deletes a queue when it has
no subscription, and the address is then deleted when it has no queues. That
would occur if the slave is activating somehow, since you'd have no
subscriptions.

Seems that getting to the bottom of why the slave is activating is probably
the main priority here.
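
If that theory is right, the settings involved would be the address-settings
entries, roughly like this (the match value is just an example), and pinning
them to false on a test broker should rule it out:

    <address-setting match="#">
       <auto-delete-queues>false</auto-delete-queues>
       <auto-delete-addresses>false</auto-delete-addresses>
    </address-setting>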

Sent from my iPhone


Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Michael André Pearce <mi...@me.com>.
I’ve just tested manually (in an HA setup) that if you set the delete policy
to OFF, which is the default, then queues and addresses do not get
undeployed on reload. E.g. queues and addresses created in the GUI or CLI
remain.

Only if you change/override that to FORCE would it remove an address or
queue not defined in broker.xml. I assume you have not set the deletion
policy to FORCE and are just on the default, OFF.
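
For reference, that policy lives in address-settings, along these lines (the
match value is just an example):

    <address-setting match="#">
       <config-delete-queues>OFF</config-delete-queues>
       <config-delete-addresses>OFF</config-delete-addresses>
    </address-setting>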

It would be a great help if you are able to put together any form of
reproducer integration test if you still see this issue.


Cheers
Mike



Sent from my iPhone


Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Justin Bertram <jb...@redhat.com>.
> If I make any change at all to the slave broker.xml file the
> "configuration reload" feature takes effect and starts/enables the
> acceptors on the Slave.

How did you test that?  Looking at the code it appears the configuration
reload logic shouldn't touch the acceptors.  Also, I just tested this on a
simple replicated live/backup pair and when I updated a security-setting on
the backup the acceptors didn't activate and it continued backing up the
live broker as expected.

> Is there a way to completely disable configuration reload altogether?

Set <configuration-file-refresh-period> to a really high number.  This
won't completely disable it technically speaking, but will effectively
disable it.
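
For example, something like this in the <core> section of broker.xml (the
value is in milliseconds, so pick whatever is effectively "never" for your
environment):

    <configuration-file-refresh-period>86400000</configuration-file-refresh-period>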

> Can configuration reload be configured to also take into account address
> and security configuration that has happened via the management API?

Many of the changes made via the management API are volatile.  However,
adding queues should be persistent.  If reloading broker.xml causes queues
added via the management API to disappear I think that's likely a bug.

> Is there a way to configure the configuration reload to consider the fact
> that it is supposed to be part of a cluster?

I'd need to understand the problematic use-case better before commenting on
that further.


Justin
