Posted to users@activemq.apache.org by Justin Bertram <jb...@apache.org> on 2017/10/19 14:53:39 UTC

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Any progress here?


Justin

On Thu, Sep 21, 2017 at 3:51 PM, Dan Langford <da...@gmail.com> wrote:

> A quick note, and then I will work on providing a more reproducible set
> of artifacts.
>
> > How did you test that?
>
> A few things I notice. 1) In the JMX console (viewed via the Hawtio/Jolokia
> API in 2.1, and the skinned version of it in the new 2.3 console [very
> nice BTW]) the slave will typically NOT show the addresses if it is
> only set up to be a backup of the master; it will also not show the
> acceptors. 2) "netstat -tunlp" on the slave's box shows that the
> slave is not listening for connections on the ports. When I change the file
> and save it, I see 3) the slave broker starts logging that it is starting
> acceptors, it logs the addresses/queues that are coming online, and it
> mentions something about SSL in regard to the amqps port. I go back and
> check 1) the JMX console, and sure enough it is now showing addresses and
> acceptors, but only the addresses mentioned in broker.xml, none of the
> ones added since then. Then over to 2) the command line "netstat
> -tunlp", and the slave is now listening on 5671. Another side effect I
> see is that 4) authentication may not work if the role was added
> programmatically.
>
> A restart of the slave resolves all of this, and it comes back online as
> simply a backup to the master.
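
For context, a minimal sketch of the sort of slave-side broker.xml being
described here; the replication-based ha-policy, the host, and the keystore
values are illustrative assumptions, not the actual files:

<ha-policy>
  <replication>
    <slave/> <!-- stay passive until the master fails -->
  </replication>
</ha-policy>

<acceptors>
  <!-- 5671 matches the AMQPS port observed above; a dormant backup should not be listening on it -->
  <acceptor name="amqps">tcp://0.0.0.0:5671?protocols=AMQP;sslEnabled=true;keyStorePath=/path/to/keystore.jks;keyStorePassword=changeit</acceptor>
</acceptors>

With a configuration along these lines, the acceptors should stay stopped
until the backup actually activates.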
>
> >  If reloading broker.xml causes queues added via the management API to
> > disappear I think that's likely a bug.
>
> That is what I observed, but I would clarify that it is only on that slave
> node. The master node is still working just fine with all the queues added
> from the management API, and when the slave restarts it goes back to
> working as expected, and failover/failback with all those additional queues
> works. So it's not a permanent delete in the cluster; it's just not
> accessible on that slave node after the configuration reload.
>
> I have not modified the delete policy.
>
> I will whip up the simplest set of broker.xml files to show this as soon as
> I can here at work.
>
>
>
>
> On Thu, Sep 21, 2017 at 1:46 PM Michael André Pearce <
> michael.andre.pearce@me.com> wrote:
>
> > The only scenario I can think of here for the loss of addresses/queues,
> > noting that somehow your slave is thinking it can activate as master (aka
> > acceptors start up), is that auto-delete-queues/auto-delete-addresses is
> > kicking in (the default is true, I believe), as it deletes a queue when it
> > has no subscriptions, and then the address is deleted when it has no
> > queues. Which would occur if the slave is activating somehow, as you'd
> > have no subscriptions.
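
For reference, a sketch of the address-setting Michael is pointing at; the
match value and the explicit false values are illustrative, and the element
names should be checked against the Artemis version in use:

<address-settings>
  <address-setting match="#">
    <!-- defaults are believed to be true; setting these to false would rule out auto-deletion -->
    <auto-delete-queues>false</auto-delete-queues>
    <auto-delete-addresses>false</auto-delete-addresses>
  </address-setting>
</address-settings>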
> >
> > Seems that getting to the bottom of why the slave is activating is
> > probably the main priority here.
> >
> > Sent from my iPhone
> >
> > > On 21 Sep 2017, at 20:36, Michael André Pearce <
> > michael.andre.pearce@me.com> wrote:
> > >
> > > I've just tested manually (in an HA setup) that if the delete policy is
> > > set to OFF, which is the default, then queues and addresses do not get
> > > undeployed on reload, e.g. queues and addresses created in the GUI or
> > > CLI remain.
> > >
> > > Only if you change/override that to FORCE would it remove an address or
> > > queue not defined in broker.xml. I assume here you have not set the
> > > deletion policy to FORCE and are just on the default OFF.
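
For reference, a sketch of where that deletion policy lives in broker.xml;
OFF is the default, and FORCE is the value that would remove addresses or
queues not defined in the file:

<address-settings>
  <address-setting match="#">
    <!-- OFF (default) leaves runtime-created queues/addresses alone on reload;
         FORCE removes anything not defined in broker.xml -->
    <config-delete-queues>OFF</config-delete-queues>
    <config-delete-addresses>OFF</config-delete-addresses>
  </address-setting>
</address-settings>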
> > >
> > > It would be a great help if you are able to make any form of
> > > reproducer integration test if you still see this issue.
> > >
> > >
> > > Cheers
> > > Mike
> > >
> > >
> > >
> > > Sent from my iPhone
> > >
> > >> On 21 Sep 2017, at 20:16, Justin Bertram <jb...@redhat.com> wrote:
> > >>
> > >> Many of the changes made via the management API are volatile. However,
> > >> adding queues should be persistent. If reloading broker.xml causes queues
> > >> added via the management API to disappear I think that's likely a bug.
> >
>

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Clebert Suconic <cl...@gmail.com>.
You edited the broker.xml on the slave and it became live?

If you edited the live broker, the only thing I can think of is a bug that
was fixed in 2.3 where queues would be removed when edited. But I don't see
that being the case here.

--
Clebert Suconic

Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups

Posted by Dan Langford <da...@gmail.com>.
I am just getting back from back-to-back tech conferences. Thanks for the
follow-up.

I attached a broker0.xml (master, :61616, :5672, :8161) and broker1.xml
(slave, :61617, :5673, :8162). When I start broker0 and broker1 I see that
broker1 announces itself as the backup. Both brokers show Artemis version
2.3.0 in the logs. I can log into the included console
http://localhost:8162/console and see that broker1 is part of a cluster and
has no queues deployed, which is correct because it is just in slave/backup
mode right now.
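
Since list attachments are often stripped, here is a rough sketch of the kind
of slave-side (broker1.xml) topology being described; the connector names and
the choice of static connectors are assumptions, and details such as cluster
credentials are omitted (plus there would be an ha-policy section marking this
broker as the slave):

<connectors>
  <connector name="broker1">tcp://localhost:61617</connector>
  <connector name="broker0">tcp://localhost:61616</connector>
</connectors>

<acceptors>
  <acceptor name="artemis">tcp://0.0.0.0:61617</acceptor>
  <acceptor name="amqp">tcp://0.0.0.0:5673?protocols=AMQP</acceptor>
</acceptors>

<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>broker1</connector-ref>
    <static-connectors>
      <connector-ref>broker0</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>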

When I edit broker1.xml while these are running and save the file, I see
these logs printed by the slave node (broker1):

22:01:22,727 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...security
22:01:22,727 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...address settings
22:01:22,728 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...diverts
22:01:22,728 INFO  [org.apache.activemq.artemis.core.server] AMQ221056:
Reloading configuration ...addresses
22:01:22,737 INFO  [org.apache.activemq.artemis.core.server] AMQ221003:
Deploying queue FOO
22:01:22,752 WARN  [org.apache.activemq.artemis.core.server] AMQ222165: No
Dead Letter Address configured for queue FOO in AddressSettings
22:01:22,752 WARN  [org.apache.activemq.artemis.core.server] AMQ222166: No
Expiry Address configured for queue FOO in AddressSettings

(The master node, broker0, printed no logs.) It is at this point that
broker1, which was the slave, deploys a queue and only takes into account
the security settings from this broker1.xml file. The slave should not have
any queues deployed, yet if I look at the console I can see the FOO queue
deployed.
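
To make the reproduction concrete, the kind of broker1.xml edit that produces
the log above would be adding an address and queue such as the following; the
anycast routing for FOO, and the DLQ/ExpiryQueue names in the accompanying
address-setting (which would satisfy the AMQ222165/AMQ222166 warnings), are
assumptions:

<addresses>
  <address name="FOO">
    <anycast>
      <queue name="FOO"/> <!-- the queue the reload deploys on the slave -->
    </anycast>
  </address>
</addresses>

<address-settings>
  <address-setting match="FOO">
    <dead-letter-address>DLQ</dead-letter-address>
    <expiry-address>ExpiryQueue</expiry-address>
  </address-setting>
</address-settings>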

If/when it does decide to deploy queues due to a failure of the master node,
it should use the security settings that are currently in use in the cluster,
which could be a combination of things defined in the broker file and other
settings changed via the API. All of that functionality does work just fine
in a failover scenario, but things get into a weird state if the broker.xml
files are changed.
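
For illustration, the broker.xml half of that combination is the
security-settings block; the role names here are placeholders. A role granted
later through the management API has no entry in the file, which is why a
reload that only considers the file can leave it out:

<security-settings>
  <security-setting match="FOO">
    <permission type="send" roles="producers"/>
    <permission type="consume" roles="consumers"/>
    <!-- roles added at runtime via the management API do not appear here -->
  </security-setting>
</security-settings>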

If I rerun this scenario with broker0.xml being the one edited, it also gets
into an odd state.

I know that this does not demonstrate all of the initial claims I made; I
will get back to working on that. However, this does show that a slave that
was an announced backup does deploy a queue and reload configuration in a
scenario where it probably should not.

If the attached files do not make it through, let me know and I can upload
them somewhere.
