You are viewing a plain text version of this content. The canonical link for it is here.
Posted to dev@activemq.apache.org by Martyn Taylor <mt...@redhat.com> on 2016/11/16 15:16:02 UTC

[DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

All,

Some discussion has happened around this topic already, but I wanted to
ensure that everyone here, who have not been following the JIRA/ARTEMIS-780
branch has a chance for input and to digest the information in this
proposal.

In order to understand the motivators outlined here, you first need to
understand how the existing addressing model works in Artemis. For those of
you who are not familiar with how things currently work, I’ve added a
document to the ARTEMIS-780 JIRA in the attachments section, that gives an
overview of the existing model and some more detail / examples of the
proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
<https://issues.apache.org/jira/browse/ARTEMIS-780>*

To summarise here, the Artemis routing/addressing model has some
restrictions:

1. It’s not possible with core (and therefore across all protocols) to
define ,at the broker side, semantics about addresses. i.e. whether an
address behaves as a “point to point” or “publish subscribe” end point

2. For JMS destinations additional configuration and objects were added to
the broker, that rely on name-spacing to add semantics to addresses i.e.
“jms.topic.” “jms.queue.”  A couple of issues with this:

   1.

   This only works for JMS and no other protocols
   2.

   Name-spacing causes issues for cross protocol communication
   3.

   It means there’s two ways of doing things, 1 for JMS and 1 for
   everything else.

3. The JMS and Core destination definitions do not have enough information
to define more intricate behaviours. Such as whether an address should
behave like a “shared subscription” or similar to a “volatile subscription”
where clients don’t get messages missed when they are offline.

4. Some protocols (AMQP is a good example) don’t have enough information in
their frames for the broker to determine how to behave for certain
endpoints and rely on broker side configuration (or provider specific
parameters).

Proposal

What I’d like to do (and what I’ve proposed in ARTEMIS-780) is to get rid
of the JMS specific components and create a single unified mechanism for
configuring all types of endpoints across all protocols to define:

   -

   Point to point (queue)
   -

   Shared Durable Subscriptions
   -

   Shared Non Durable Subscriptions
   -

   Non Shared durable subscriptions
   -

   Non Shared Non durable subscriptions

To do this, the idea is to create a new “Address” configuration/management
object, that has certain properties such as a routing type which represents
how messages are routed to queues with this address.

When a request for subscription is received by Artemis, the relevant piece
can just look up the address and check it’s properties to determine how to
behave (or if an address doesn’t exist) then default to our existing
behaviour. For those interested in the details of how this might work I’ve
outlined some specific examples in the document on the JIRA.

What are the user impacts:

1. Configuration would need to be revised in order to expose the new
addressing object. I propose that we either continue supporting the old
schema for a while and/or provide a tool to migrate the configuration
schema.

2. Some new management operations would need to be added to expose the new
objects.

3. The JMS configuration and management objects would become obsolete, and
would need removing. The Broker side JMS resources were only a thin facade
to allow some JMS specific behaviour for managing destinations and for
things like registering objects in JNDI.

Broker side JNDI was removed in Artemis 1.0 in order to align with ActiveMQ
5.x style of client side JNDI.  These JMS pieces and their management
objects don't really do much, creating connection factories for instance
offers no functionality right now.  Going forward, users should be able to
use the core management API to do everything.

4. All client applications should behave exactly as they were before. The
proposal is for adding features to the core model, not removing any.  For
things like the Artemis JMS client which relied on name-spaces, they’ll be
a mechanism to define a name-spaced address and a mechanism to switch back
on name-spaces in the client.

5. Given some of the API changes and removal of the JMS specific pieces.
This would likely warrant a major bump. i.e. Artemis 2.0.0.

Whilst I’ve been looking at this, it’s become apparent, that the JMS pieces
have leaked into lots of areas of the code base, which does mean we’d need
to do a fair amount refactoring to move these bits to the new model.

In my opinion this proposal can only be a good thing. It creates a single
place (core) where all addressing objects are configured and managed and
allows all protocol managers to plug into the same mechanism. It solves
some of the cross protocol JMS → other protocols that we’ve seen users
struggle with, but still offers a way to support all the old behaviour in
client applications.

What are others thoughts on this? Any suggestions, comments or concerns?

Regards
Martyn

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Timothy Bish <ta...@gmail.com>.
<cough>....JMS is not a protocol....<cough>

On 11/16/2016 01:39 PM, Justin Bertram wrote:
> Some additional historical context from my personal observations...
>
> The "jms.queue." and "jms.topic." queue/address prefixes were put in place years ago when the code-base was relatively young.  This was before my time but I believe this was done because it was an extremely simple (and effective) solution to the problem of how to provide different semantics between JMS and core.  JMS was the first and only non-core protocol supported by the broker for a long time.  As other protocols were implemented the whole prefix notion was recognized as a weakness (e.g. see ARTEMIS-203).  Since the donation to Apache significant work has been done on other protocols like AMQP, MQTT, and STOMP.  IMO, this has pressed the issue to the point of action.
>
> I think making the changes that Martyn has outlined will provide a better foundation for the long-term of health of Artemis which is an increasingly multi-protocol broker.  It should make the broker simpler which will be a win for configuration as well as maintenance and new protocol integration.
>
>
> Justin
>
> ----- Original Message -----
> From: "Martyn Taylor" <mt...@redhat.com>
> To: dev@activemq.apache.org
> Sent: Wednesday, November 16, 2016 9:16:02 AM
> Subject: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0
>
> All,
>
> Some discussion has happened around this topic already, but I wanted to
> ensure that everyone here, who have not been following the JIRA/ARTEMIS-780
> branch has a chance for input and to digest the information in this
> proposal.
>
> In order to understand the motivators outlined here, you first need to
> understand how the existing addressing model works in Artemis. For those of
> you who are not familiar with how things currently work, I\u2019ve added a
> document to the ARTEMIS-780 JIRA in the attachments section, that gives an
> overview of the existing model and some more detail / examples of the
> proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
> <https://issues.apache.org/jira/browse/ARTEMIS-780>*
>
> To summarise here, the Artemis routing/addressing model has some
> restrictions:
>
> 1. It\u2019s not possible with core (and therefore across all protocols) to
> define ,at the broker side, semantics about addresses. i.e. whether an
> address behaves as a \u201cpoint to point\u201d or \u201cpublish subscribe\u201d end point
>
> 2. For JMS destinations additional configuration and objects were added to
> the broker, that rely on name-spacing to add semantics to addresses i.e.
> \u201cjms.topic.\u201d \u201cjms.queue.\u201d  A couple of issues with this:
>
>     1.
>
>     This only works for JMS and no other protocols
>     2.
>
>     Name-spacing causes issues for cross protocol communication
>     3.
>
>     It means there\u2019s two ways of doing things, 1 for JMS and 1 for
>     everything else.
>
> 3. The JMS and Core destination definitions do not have enough information
> to define more intricate behaviours. Such as whether an address should
> behave like a \u201cshared subscription\u201d or similar to a \u201cvolatile subscription\u201d
> where clients don\u2019t get messages missed when they are offline.
>
> 4. Some protocols (AMQP is a good example) don\u2019t have enough information in
> their frames for the broker to determine how to behave for certain
> endpoints and rely on broker side configuration (or provider specific
> parameters).
>
> Proposal
>
> What I\u2019d like to do (and what I\u2019ve proposed in ARTEMIS-780) is to get rid
> of the JMS specific components and create a single unified mechanism for
> configuring all types of endpoints across all protocols to define:
>
>     -
>
>     Point to point (queue)
>     -
>
>     Shared Durable Subscriptions
>     -
>
>     Shared Non Durable Subscriptions
>     -
>
>     Non Shared durable subscriptions
>     -
>
>     Non Shared Non durable subscriptions
>
> To do this, the idea is to create a new \u201cAddress\u201d configuration/management
> object, that has certain properties such as a routing type which represents
> how messages are routed to queues with this address.
>
> When a request for subscription is received by Artemis, the relevant piece
> can just look up the address and check it\u2019s properties to determine how to
> behave (or if an address doesn\u2019t exist) then default to our existing
> behaviour. For those interested in the details of how this might work I\u2019ve
> outlined some specific examples in the document on the JIRA.
>
> What are the user impacts:
>
> 1. Configuration would need to be revised in order to expose the new
> addressing object. I propose that we either continue supporting the old
> schema for a while and/or provide a tool to migrate the configuration
> schema.
>
> 2. Some new management operations would need to be added to expose the new
> objects.
>
> 3. The JMS configuration and management objects would become obsolete, and
> would need removing. The Broker side JMS resources were only a thin facade
> to allow some JMS specific behaviour for managing destinations and for
> things like registering objects in JNDI.
>
> Broker side JNDI was removed in Artemis 1.0 in order to align with ActiveMQ
> 5.x style of client side JNDI.  These JMS pieces and their management
> objects don't really do much, creating connection factories for instance
> offers no functionality right now.  Going forward, users should be able to
> use the core management API to do everything.
>
> 4. All client applications should behave exactly as they were before. The
> proposal is for adding features to the core model, not removing any.  For
> things like the Artemis JMS client which relied on name-spaces, they\u2019ll be
> a mechanism to define a name-spaced address and a mechanism to switch back
> on name-spaces in the client.
>
> 5. Given some of the API changes and removal of the JMS specific pieces.
> This would likely warrant a major bump. i.e. Artemis 2.0.0.
>
> Whilst I\u2019ve been looking at this, it\u2019s become apparent, that the JMS pieces
> have leaked into lots of areas of the code base, which does mean we\u2019d need
> to do a fair amount refactoring to move these bits to the new model.
>
> In my opinion this proposal can only be a good thing. It creates a single
> place (core) where all addressing objects are configured and managed and
> allows all protocol managers to plug into the same mechanism. It solves
> some of the cross protocol JMS \u2192 other protocols that we\u2019ve seen users
> struggle with, but still offers a way to support all the old behaviour in
> client applications.
>
> What are others thoughts on this? Any suggestions, comments or concerns?
>
> Regards
> Martyn
>


-- 
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Justin Bertram <jb...@apache.com>.
Some additional historical context from my personal observations...

The "jms.queue." and "jms.topic." queue/address prefixes were put in place years ago when the code-base was relatively young.  This was before my time but I believe this was done because it was an extremely simple (and effective) solution to the problem of how to provide different semantics between JMS and core.  JMS was the first and only non-core protocol supported by the broker for a long time.  As other protocols were implemented the whole prefix notion was recognized as a weakness (e.g. see ARTEMIS-203).  Since the donation to Apache significant work has been done on other protocols like AMQP, MQTT, and STOMP.  IMO, this has pressed the issue to the point of action.

I think making the changes that Martyn has outlined will provide a better foundation for the long-term of health of Artemis which is an increasingly multi-protocol broker.  It should make the broker simpler which will be a win for configuration as well as maintenance and new protocol integration.


Justin

----- Original Message -----
From: "Martyn Taylor" <mt...@redhat.com>
To: dev@activemq.apache.org
Sent: Wednesday, November 16, 2016 9:16:02 AM
Subject: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

All,

Some discussion has happened around this topic already, but I wanted to
ensure that everyone here, who have not been following the JIRA/ARTEMIS-780
branch has a chance for input and to digest the information in this
proposal.

In order to understand the motivators outlined here, you first need to
understand how the existing addressing model works in Artemis. For those of
you who are not familiar with how things currently work, I’ve added a
document to the ARTEMIS-780 JIRA in the attachments section, that gives an
overview of the existing model and some more detail / examples of the
proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
<https://issues.apache.org/jira/browse/ARTEMIS-780>*

To summarise here, the Artemis routing/addressing model has some
restrictions:

1. It’s not possible with core (and therefore across all protocols) to
define ,at the broker side, semantics about addresses. i.e. whether an
address behaves as a “point to point” or “publish subscribe” end point

2. For JMS destinations additional configuration and objects were added to
the broker, that rely on name-spacing to add semantics to addresses i.e.
“jms.topic.” “jms.queue.”  A couple of issues with this:

   1.

   This only works for JMS and no other protocols
   2.

   Name-spacing causes issues for cross protocol communication
   3.

   It means there’s two ways of doing things, 1 for JMS and 1 for
   everything else.

3. The JMS and Core destination definitions do not have enough information
to define more intricate behaviours. Such as whether an address should
behave like a “shared subscription” or similar to a “volatile subscription”
where clients don’t get messages missed when they are offline.

4. Some protocols (AMQP is a good example) don’t have enough information in
their frames for the broker to determine how to behave for certain
endpoints and rely on broker side configuration (or provider specific
parameters).

Proposal

What I’d like to do (and what I’ve proposed in ARTEMIS-780) is to get rid
of the JMS specific components and create a single unified mechanism for
configuring all types of endpoints across all protocols to define:

   -

   Point to point (queue)
   -

   Shared Durable Subscriptions
   -

   Shared Non Durable Subscriptions
   -

   Non Shared durable subscriptions
   -

   Non Shared Non durable subscriptions

To do this, the idea is to create a new “Address” configuration/management
object, that has certain properties such as a routing type which represents
how messages are routed to queues with this address.

When a request for subscription is received by Artemis, the relevant piece
can just look up the address and check it’s properties to determine how to
behave (or if an address doesn’t exist) then default to our existing
behaviour. For those interested in the details of how this might work I’ve
outlined some specific examples in the document on the JIRA.

What are the user impacts:

1. Configuration would need to be revised in order to expose the new
addressing object. I propose that we either continue supporting the old
schema for a while and/or provide a tool to migrate the configuration
schema.

2. Some new management operations would need to be added to expose the new
objects.

3. The JMS configuration and management objects would become obsolete, and
would need removing. The Broker side JMS resources were only a thin facade
to allow some JMS specific behaviour for managing destinations and for
things like registering objects in JNDI.

Broker side JNDI was removed in Artemis 1.0 in order to align with ActiveMQ
5.x style of client side JNDI.  These JMS pieces and their management
objects don't really do much, creating connection factories for instance
offers no functionality right now.  Going forward, users should be able to
use the core management API to do everything.

4. All client applications should behave exactly as they were before. The
proposal is for adding features to the core model, not removing any.  For
things like the Artemis JMS client which relied on name-spaces, they’ll be
a mechanism to define a name-spaced address and a mechanism to switch back
on name-spaces in the client.

5. Given some of the API changes and removal of the JMS specific pieces.
This would likely warrant a major bump. i.e. Artemis 2.0.0.

Whilst I’ve been looking at this, it’s become apparent, that the JMS pieces
have leaked into lots of areas of the code base, which does mean we’d need
to do a fair amount refactoring to move these bits to the new model.

In my opinion this proposal can only be a good thing. It creates a single
place (core) where all addressing objects are configured and managed and
allows all protocol managers to plug into the same mechanism. It solves
some of the cross protocol JMS → other protocols that we’ve seen users
struggle with, but still offers a way to support all the old behaviour in
client applications.

What are others thoughts on this? Any suggestions, comments or concerns?

Regards
Martyn

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
On 11/16/16 5:57 PM, Justin Bertram wrote:

>> 0. isn't about auto-creation per-say, its about allowing protocol specific handlers to create address objects as needed for things like subscriptions.  JMS durable subscription, MQTT retain, etc.
> All of that is already in place.  To be clear, subscriptions are just queues (whether that's for JMS, STOMP, MQTT, etc.).
Rockin' =)

>> IHMO using the same prefix across protocols as much as possible would be super dope.
> I think we'd want to make this configurable so that it would be up to users.
Are you saying support making it configurable for each individual protocol?
>> Other messaging systems (namely, IBM MQ Remote Queues, Cluster Queues) support fully qualified destination names.
> This seems to me beyond the scope of this work.  The addressing improvements here are all intra-broker.  I'm in favor of keeping the work narrowly focused so our objectives remain clear.
I agree that remote broker addressing is out-of-scope for this first 
round of implementation. I'm suggesting it might make sense to at least 
accommodate it in the data model now, so there wouldn't be API breakage.

For example:
   "queue:///" as the prefix (or some other uri scheme w/ triple 
slashes) would support adding the remote broker part in later.

-Matt

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Justin Bertram <jb...@apache.com>.
> 0. isn't about auto-creation per-say, its about allowing protocol specific handlers to create address objects as needed for things like subscriptions.  JMS durable subscription, MQTT retain, etc.

All of that is already in place.  To be clear, subscriptions are just queues (whether that's for JMS, STOMP, MQTT, etc.).


> IHMO using the same prefix across protocols as much as possible would be super dope.

I think we'd want to make this configurable so that it would be up to users.


> Other messaging systems (namely, IBM MQ Remote Queues, Cluster Queues) support fully qualified destination names.

This seems to me beyond the scope of this work.  The addressing improvements here are all intra-broker.  I'm in favor of keeping the work narrowly focused so our objectives remain clear.


Justin

----- Original Message -----
From: "Matt Pavlovich" <ma...@gmail.com>
To: dev@activemq.apache.org
Sent: Wednesday, November 16, 2016 3:38:41 PM
Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

On 11/16/16 2:23 PM, Justin Bertram wrote:

> 0. Auto-creation for JMS queues and topics was already supported, and I expect that will continue.
0. isn't about auto-creation per-say, its about allowing protocol 
specific handlers to create address objects as needed for things like 
subscriptions.  JMS durable subscription, MQTT retain, etc.

> 1. I'm not sure I understand the use-case for having a topic and queue with the same name.  Can you clarify this?
Re-pasting from the IRC convo for folks on the list:

1a. I’m saying 90% of the products in the enterprise messaging market 
supports it (IBM MQ, ActiveMQ 5.x, Tibco EMS). The spec does not clearly 
call it out specifically. I believe JMS v.1.1 4.11 and 10.1.3 could be 
interpreted as support for it.

1b. Since the addressing work is being done now, it seems like a good 
time to get a decision on it

1c. NOT supporting it would mean breaking compatibility b/w ActiveMQ 5.x 
and having to document it for folks migrating from major commercial JMS 
providers

1d. I do not know of a JMS provider that does _not_ support this

> 2. I expect STOMP to have configurable multicast and anycast prefixes for destinations.  Whether users choose "/topic/" and "/queue/" for those respectively is up to them.  I'm not sure about AMQP.
Makes sense.. IHMO using the same prefix across protocols as much as 
possible would be super dope.
> 3. I think using URIs has merit, but each protocol has nuances that would probably make something universal impossible.
Which protocol(s)?
> 4. See ARTEMIS-815 (sub-task of ARTEMIS-780).
Rockin, that covers it! This is something that ActiveMQ 5.x seemed to 
not always have consistent (specifically, in some plugins that only 
operate on "."). Destination separator probably needs to be a one-time 
deal across the broker b/c plugins may need to reference it via global 
config object.

> 5. Can you clarify this a bit more?  An example would be great.
Other messaging systems (namely, IBM MQ Remote Queues, Cluster Queues) 
support fully qualified destination names.

Example:

     a. Client connects to BrokerA.
     b. Client sends message addressed as queue://BrokerB/My.Queue
     c. BrokerA delivers message to BrokerB on behalf of the client
     d. BrokerB delivers the message to queue:///My.Queue

The use case is client code to be able to work with dynamic networks 
(think retail environments / kiosk / IoT where remote brokers come up 
and down in relative high frequency). A remote broker naming convention 
is used and clients are able to address brokers+destinations dynamically.
-Matt

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
On 11/16/16 2:23 PM, Justin Bertram wrote:

> 0. Auto-creation for JMS queues and topics was already supported, and I expect that will continue.
0. isn't about auto-creation per-say, its about allowing protocol 
specific handlers to create address objects as needed for things like 
subscriptions.  JMS durable subscription, MQTT retain, etc.

> 1. I'm not sure I understand the use-case for having a topic and queue with the same name.  Can you clarify this?
Re-pasting from the IRC convo for folks on the list:

1a. I\u2019m saying 90% of the products in the enterprise messaging market 
supports it (IBM MQ, ActiveMQ 5.x, Tibco EMS). The spec does not clearly 
call it out specifically. I believe JMS v.1.1 4.11 and 10.1.3 could be 
interpreted as support for it.

1b. Since the addressing work is being done now, it seems like a good 
time to get a decision on it

1c. NOT supporting it would mean breaking compatibility b/w ActiveMQ 5.x 
and having to document it for folks migrating from major commercial JMS 
providers

1d. I do not know of a JMS provider that does _not_ support this

> 2. I expect STOMP to have configurable multicast and anycast prefixes for destinations.  Whether users choose "/topic/" and "/queue/" for those respectively is up to them.  I'm not sure about AMQP.
Makes sense.. IHMO using the same prefix across protocols as much as 
possible would be super dope.
> 3. I think using URIs has merit, but each protocol has nuances that would probably make something universal impossible.
Which protocol(s)?
> 4. See ARTEMIS-815 (sub-task of ARTEMIS-780).
Rockin, that covers it! This is something that ActiveMQ 5.x seemed to 
not always have consistent (specifically, in some plugins that only 
operate on "."). Destination separator probably needs to be a one-time 
deal across the broker b/c plugins may need to reference it via global 
config object.

> 5. Can you clarify this a bit more?  An example would be great.
Other messaging systems (namely, IBM MQ Remote Queues, Cluster Queues) 
support fully qualified destination names.

Example:

     a. Client connects to BrokerA.
     b. Client sends message addressed as queue://BrokerB/My.Queue
     c. BrokerA delivers message to BrokerB on behalf of the client
     d. BrokerB delivers the message to queue:///My.Queue

The use case is client code to be able to work with dynamic networks 
(think retail environments / kiosk / IoT where remote brokers come up 
and down in relative high frequency). A remote broker naming convention 
is used and clients are able to address brokers+destinations dynamically.
-Matt

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Justin Bertram <jb...@apache.com>.
0. Auto-creation for JMS queues and topics was already supported, and I expect that will continue.

1. I'm not sure I understand the use-case for having a topic and queue with the same name.  Can you clarify this?

2. I expect STOMP to have configurable multicast and anycast prefixes for destinations.  Whether users choose "/topic/" and "/queue/" for those respectively is up to them.  I'm not sure about AMQP.

3. I think using URIs has merit, but each protocol has nuances that would probably make something universal impossible.

4. See ARTEMIS-815 (sub-task of ARTEMIS-780).

5. Can you clarify this a bit more?  An example would be great.


Justin

----- Original Message -----
From: "Matt Pavlovich" <ma...@gmail.com>
To: dev@activemq.apache.org
Sent: Wednesday, November 16, 2016 12:23:11 PM
Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Hi Martin-

Glad to see this area getting dedicated attention. A couple things I 
didn't see covered in the doc or the JIRA comments. (I'll be adding to 
the JIRA comments as well.)

Items:

0. Pre-configuring destinations is a big brain drain, so anything that 
can be client-driven is a win. Also, protocol specific handlers could 
perform the admin operations on-demand.

    For example:  session.createDurableSubscriber(...)   The JMS handler 
create the subscription on the behalf of the client.

1. Separate topic and queue namespaces.. in JMS topic:///foo != 
queue:///foo. The addressing will need some sort of way to separate the 
two during naming collisions.

2. In ActiveMQ 5.x, AMQP and STOMP handled the addressing by using 
queue:/// and topic:/// prefixes. I don't think that is necessarily a 
bad thing, but something to consider b/c we need to support #1

3. As far as destination behaviors, how about using uri parameters to 
pass provider (Artemis) specific settings on-the-fly?

     For example:  in AMPQ the address could be 
topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.

     There is precedence in using uri parameters to configure the 
Destination in JMS as well. IBM MQ has 
session.createQueue("My.Queue?targetClient=1")

     Note: AMQP supports options as well, so that could be used as well. 
However, uri's tend to be better for externalizing configuration management.

4. Destination name separation b/w protocol handlers.  STOMP and MQTT 
like "/" and JMS likes "." as destination name separators. Is there any 
thought to having a native destination name separator and then have 
protocol-specific convertors?

5. Fully qualified destination names that include a broker name. Other 
providers support fully-qualified destination names in JMS following the 
format:  queue://$brokerName/$queueName. Adding this would go a long way 
to supporting migration of current applications without having to change 
client-code.

     Note: This would probably impact cluster handling as well, so 
perhaps in phase 1 there is just a placeholder for supporting a broker 
name in the future?

-Matt

On 11/16/16 10:16 AM, Martyn Taylor wrote:
> All,
>
> Some discussion has happened around this topic already, but I wanted to
> ensure that everyone here, who have not been following the JIRA/ARTEMIS-780
> branch has a chance for input and to digest the information in this
> proposal.
>
> In order to understand the motivators outlined here, you first need to
> understand how the existing addressing model works in Artemis. For those of
> you who are not familiar with how things currently work, I’ve added a
> document to the ARTEMIS-780 JIRA in the attachments section, that gives an
> overview of the existing model and some more detail / examples of the
> proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
> <https://issues.apache.org/jira/browse/ARTEMIS-780>*
>
> To summarise here, the Artemis routing/addressing model has some
> restrictions:
>
> 1. It’s not possible with core (and therefore across all protocols) to
> define ,at the broker side, semantics about addresses. i.e. whether an
> address behaves as a “point to point” or “publish subscribe” end point
>
> 2. For JMS destinations additional configuration and objects were added to
> the broker, that rely on name-spacing to add semantics to addresses i.e.
> “jms.topic.” “jms.queue.”  A couple of issues with this:
>
>     1.
>
>     This only works for JMS and no other protocols
>     2.
>
>     Name-spacing causes issues for cross protocol communication
>     3.
>
>     It means there’s two ways of doing things, 1 for JMS and 1 for
>     everything else.
>
> 3. The JMS and Core destination definitions do not have enough information
> to define more intricate behaviours. Such as whether an address should
> behave like a “shared subscription” or similar to a “volatile subscription”
> where clients don’t get messages missed when they are offline.
>
> 4. Some protocols (AMQP is a good example) don’t have enough information in
> their frames for the broker to determine how to behave for certain
> endpoints and rely on broker side configuration (or provider specific
> parameters).
>
> Proposal
>
> What I’d like to do (and what I’ve proposed in ARTEMIS-780) is to get rid
> of the JMS specific components and create a single unified mechanism for
> configuring all types of endpoints across all protocols to define:
>
>     -
>
>     Point to point (queue)
>     -
>
>     Shared Durable Subscriptions
>     -
>
>     Shared Non Durable Subscriptions
>     -
>
>     Non Shared durable subscriptions
>     -
>
>     Non Shared Non durable subscriptions
>
> To do this, the idea is to create a new “Address” configuration/management
> object, that has certain properties such as a routing type which represents
> how messages are routed to queues with this address.
>
> When a request for subscription is received by Artemis, the relevant piece
> can just look up the address and check it’s properties to determine how to
> behave (or if an address doesn’t exist) then default to our existing
> behaviour. For those interested in the details of how this might work I’ve
> outlined some specific examples in the document on the JIRA.
>
> What are the user impacts:
>
> 1. Configuration would need to be revised in order to expose the new
> addressing object. I propose that we either continue supporting the old
> schema for a while and/or provide a tool to migrate the configuration
> schema.
>
> 2. Some new management operations would need to be added to expose the new
> objects.
>
> 3. The JMS configuration and management objects would become obsolete, and
> would need removing. The Broker side JMS resources were only a thin facade
> to allow some JMS specific behaviour for managing destinations and for
> things like registering objects in JNDI.
>
> Broker side JNDI was removed in Artemis 1.0 in order to align with ActiveMQ
> 5.x style of client side JNDI.  These JMS pieces and their management
> objects don't really do much, creating connection factories for instance
> offers no functionality right now.  Going forward, users should be able to
> use the core management API to do everything.
>
> 4. All client applications should behave exactly as they were before. The
> proposal is for adding features to the core model, not removing any.  For
> things like the Artemis JMS client which relied on name-spaces, they’ll be
> a mechanism to define a name-spaced address and a mechanism to switch back
> on name-spaces in the client.
>
> 5. Given some of the API changes and removal of the JMS specific pieces.
> This would likely warrant a major bump. i.e. Artemis 2.0.0.
>
> Whilst I’ve been looking at this, it’s become apparent, that the JMS pieces
> have leaked into lots of areas of the code base, which does mean we’d need
> to do a fair amount refactoring to move these bits to the new model.
>
> In my opinion this proposal can only be a good thing. It creates a single
> place (core) where all addressing objects are configured and managed and
> allows all protocol managers to plug into the same mechanism. It solves
> some of the cross protocol JMS → other protocols that we’ve seen users
> struggle with, but still offers a way to support all the old behaviour in
> client applications.
>
> What are others thoughts on this? Any suggestions, comments or concerns?
>
> Regards
> Martyn
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
On Fri, Nov 18, 2016 at 1:16 PM, Clebert Suconic <cl...@gmail.com>
wrote:

> -1000 on the Header metadata.

What header meta-data are you talking about?
>
> This is the same as changing the wire. It
> won't be possible to provide compatibility with older Artemis 1.0,1,2,3,4
> and 1.5 clients. Beyond Hornetq compatibility.
>
In what way? please be specific.

>
> I had already looked at the packets and nothing changed so far.

There is no need to change any packet in what I've suggested.

> So we are
> good. But adding information to change semantics on the producer is a non
> compatible change in my opinion.

What do you mean on the producer?  That's not what I am suggesting.




> On Fri, Nov 18, 2016 at 6:44 AM Martyn Taylor <mt...@redhat.com> wrote:
>
> > Clebert,
> >
> > This can work.  If you look at the ARTEMIS-780 branch and see how we've
> > approached this, you'll notice that we don't touch any of the internal
> > APIs.  It's only a few methods added.  Having two addresses in the
> config,
> > is not really creating two addressing inside of Artemis.  There's only
> one
> > address and all queues have this address.  The only thing that changes is
> > the fact that a queue binding now has some meta-data (an AddressInfo
> > object) that determines how messages are routed to it.  It's perfectly
> > viable to have 2 queues, with the same address, but with different
> address
> > info objects.
> >
> > As for the producer case, we could just add a message header that
> > identifies that this was sent for addresses with "multicast" only.  And
> put
> > the appropriate filter on the queues when they're created.
> >
> > In summary, it's possible, the question is whether this is the correct
> > approach.  I'm open to ideas, but I don't think anyone has suggested
> > anything as of yet that covers all use cases.
> >
> > Cheers
> > Martyn
> >
> > On Thu, Nov 17, 2016 at 12:28 PM, Clebert Suconic <
> > clebert.suconic@gmail.com
> > > wrote:
> >
> > > > Just so I understand exactly what you are saying here.  You're saying
> > > that
> > > > a client sends to "foo" and a consumer received messages sent to
> "foo".
> > > In
> > > > order for the consumer to consume from "foo" it passes in either
> "foo",
> > > > "queue:///foo" or "topic:///foo" which determines how the messages
> are
> > > > propagated to the client?  "foo" means let the broker decide,
> > > > "queue:///foo" and "topic:///foo" mean let the client decide.  In
> > > addition
> > > > to these two approaches, it may be that the protocol itself wants to
> > > > decide.  MQTT for example, always requires a subscription.
> > > >
> > > > One way to do this, not straying too far from the original proposal,
> > > would
> > > > be to make the address uniqueness a combination of the routing type
> and
> > > the
> > > > address name.  This would allow something like:
> > > >
> > > > <address name="foo" routingType="anycast">
> > > > <address name="foo" routingType="multicast">
> > > >
> > > > We'd need to ensure there is a precedent set for times when a
> > subscriber
> > > > just subscribes to "foo".  I'd say it makes sense for "multicast" to
> > take
> > > > precedence in this case.
> > >
> > >
> > > That wouldn't work. You would need to change the API to pass in an
> > > address type, the protocols to have an address type (in a way it
> > > wouldn't be compatible with previous clients).
> > >
> > > I think this is settled if you make the prefix configurable for cases
> > > where users want to have such thing.
> > >
> >
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
-1000 on the Header metadata. This is the same as changing the wire. It
won't be possible to provide compatibility with older Artemis 1.0,1,2,3,4
and 1.5 clients. Beyond Hornetq compatibility.


I had already looked at the packets and nothing changed so far. So we are
good. But adding information to change semantics on the producer is a non
compatible change in my opinion.

On Fri, Nov 18, 2016 at 6:44 AM Martyn Taylor <mt...@redhat.com> wrote:

> Clebert,
>
> This can work.  If you look at the ARTEMIS-780 branch and see how we've
> approached this, you'll notice that we don't touch any of the internal
> APIs.  It's only a few methods added.  Having two addresses in the config,
> is not really creating two addressing inside of Artemis.  There's only one
> address and all queues have this address.  The only thing that changes is
> the fact that a queue binding now has some meta-data (an AddressInfo
> object) that determines how messages are routed to it.  It's perfectly
> viable to have 2 queues, with the same address, but with different address
> info objects.
>
> As for the producer case, we could just add a message header that
> identifies that this was sent for addresses with "multicast" only.  And put
> the appropriate filter on the queues when they're created.
>
> In summary, it's possible, the question is whether this is the correct
> approach.  I'm open to ideas, but I don't think anyone has suggested
> anything as of yet that covers all use cases.
>
> Cheers
> Martyn
>
> On Thu, Nov 17, 2016 at 12:28 PM, Clebert Suconic <
> clebert.suconic@gmail.com
> > wrote:
>
> > > Just so I understand exactly what you are saying here.  You're saying
> > that
> > > a client sends to "foo" and a consumer received messages sent to "foo".
> > In
> > > order for the consumer to consume from "foo" it passes in either "foo",
> > > "queue:///foo" or "topic:///foo" which determines how the messages are
> > > propagated to the client?  "foo" means let the broker decide,
> > > "queue:///foo" and "topic:///foo" mean let the client decide.  In
> > addition
> > > to these two approaches, it may be that the protocol itself wants to
> > > decide.  MQTT for example, always requires a subscription.
> > >
> > > One way to do this, not straying too far from the original proposal,
> > would
> > > be to make the address uniqueness a combination of the routing type and
> > the
> > > address name.  This would allow something like:
> > >
> > > <address name="foo" routingType="anycast">
> > > <address name="foo" routingType="multicast">
> > >
> > > We'd need to ensure there is a precedent set for times when a
> subscriber
> > > just subscribes to "foo".  I'd say it makes sense for "multicast" to
> take
> > > precedence in this case.
> >
> >
> > That wouldn't work. You would need to change the API to pass in an
> > address type, the protocols to have an address type (in a way it
> > wouldn't be compatible with previous clients).
> >
> > I think this is settled if you make the prefix configurable for cases
> > where users want to have such thing.
> >
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
Clebert,

This can work.  If you look at the ARTEMIS-780 branch and see how we've
approached this, you'll notice that we don't touch any of the internal
APIs.  It's only a few methods added.  Having two addresses in the config,
is not really creating two addressing inside of Artemis.  There's only one
address and all queues have this address.  The only thing that changes is
the fact that a queue binding now has some meta-data (an AddressInfo
object) that determines how messages are routed to it.  It's perfectly
viable to have 2 queues, with the same address, but with different address
info objects.

As for the producer case, we could just add a message header that
identifies that this was sent for addresses with "multicast" only.  And put
the appropriate filter on the queues when they're created.

In summary, it's possible, the question is whether this is the correct
approach.  I'm open to ideas, but I don't think anyone has suggested
anything as of yet that covers all use cases.

Cheers
Martyn

On Thu, Nov 17, 2016 at 12:28 PM, Clebert Suconic <clebert.suconic@gmail.com
> wrote:

> > Just so I understand exactly what you are saying here.  You're saying
> that
> > a client sends to "foo" and a consumer received messages sent to "foo".
> In
> > order for the consumer to consume from "foo" it passes in either "foo",
> > "queue:///foo" or "topic:///foo" which determines how the messages are
> > propagated to the client?  "foo" means let the broker decide,
> > "queue:///foo" and "topic:///foo" mean let the client decide.  In
> addition
> > to these two approaches, it may be that the protocol itself wants to
> > decide.  MQTT for example, always requires a subscription.
> >
> > One way to do this, not straying too far from the original proposal,
> would
> > be to make the address uniqueness a combination of the routing type and
> the
> > address name.  This would allow something like:
> >
> > <address name="foo" routingType="anycast">
> > <address name="foo" routingType="multicast">
> >
> > We'd need to ensure there is a precedent set for times when a subscriber
> > just subscribes to "foo".  I'd say it makes sense for "multicast" to take
> > precedence in this case.
>
>
> That wouldn't work. You would need to change the API to pass in an
> address type, the protocols to have an address type (in a way it
> wouldn't be compatible with previous clients).
>
> I think this is settled if you make the prefix configurable for cases
> where users want to have such thing.
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
> Just so I understand exactly what you are saying here.  You're saying that
> a client sends to "foo" and a consumer received messages sent to "foo".  In
> order for the consumer to consume from "foo" it passes in either "foo",
> "queue:///foo" or "topic:///foo" which determines how the messages are
> propagated to the client?  "foo" means let the broker decide,
> "queue:///foo" and "topic:///foo" mean let the client decide.  In addition
> to these two approaches, it may be that the protocol itself wants to
> decide.  MQTT for example, always requires a subscription.
>
> One way to do this, not straying too far from the original proposal, would
> be to make the address uniqueness a combination of the routing type and the
> address name.  This would allow something like:
>
> <address name="foo" routingType="anycast">
> <address name="foo" routingType="multicast">
>
> We'd need to ensure there is a precedent set for times when a subscriber
> just subscribes to "foo".  I'd say it makes sense for "multicast" to take
> precedence in this case.


That wouldn't work. You would need to change the API to pass in an
address type, the protocols to have an address type (in a way it
wouldn't be compatible with previous clients).

I think this is settled if you make the prefix configurable for cases
where users want to have such thing.

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Justin Bertram <jb...@apache.com>.
I agree.  It looks good.


Justin

----- Original Message -----
From: "Clebert Suconic" <cl...@gmail.com>
To: dev@activemq.apache.org
Sent: Monday, November 21, 2016 1:49:13 PM
Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

On Mon, Nov 21, 2016 at 2:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:
> Martyn-
>
> I think you nailed it here-- well done =)

+1000

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
Justin-

Agreed all around. I'm suggesting a separate thread on feature set for 
2.0 to allow around of input for those that don't have their eyes on 
this addressing thread. I wasn't sure if it had been discussed or not.

I'll kick off a new thread.

-Matt


On 12/7/16 3:31 PM, Justin Bertram wrote:
> Essential feature parity with 5.x (where it makes sense) has been a goal all along, but I think waiting until such parity exists before the next major release means the community will be waiting quite a bit longer than they already have.  Meanwhile, new functionality that could benefit the community will remain unavailable.  In any event, "feature parity" is a bit vague.  If there is something specific with regards to 5.x parity that you're looking for then I think you should make that explicit so it can be evaluated.
>
> I'm in favor of merging the addressing changes onto master, hardening things up a bit, and then releasing.
>
>
> Justin
>
> ----- Original Message -----
> From: "Matt Pavlovich" <ma...@gmail.com>
> To: dev@activemq.apache.org
> Sent: Wednesday, December 7, 2016 2:04:13 PM
> Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0
>
> IMHO, I think it would be good to kick up a thread on what it means to
> be 2.0. It sounds like the addressing changes definitely warrant it on
> its own, but I'm thinking having ActiveMQ 5.x feature parity would be a
> good goal for the 2.0 release.  My $0.02
>
> On 12/7/16 2:56 PM, Clebert Suconic wrote:
>> +1000
>>
>>
>> It needs one final cleanup before it can be done though.. these commit
>> messages need meaninful descriptions.
>>
>> if Justin or Martyn could come up with that since they did most of the
>> work on the branch.
>>
>> This will really require bumping the release to 2.0.0 (there's a
>> 2.0.snapshot commit on it already).  I would merge this into master,
>> and fork the current master as 1.x.
>>
>>
>>
>>
>> On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com> wrote:
>>> This would be a good time to move to master, would allow other to more
>>> easily get onboard
>>>
>>>
>>> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>>>> I have rebased ARTEMIS-780 in top of master. There was a lot of
>>>> conflicts...
>>>>
>>>> I have aggregated/squashed most of the commits by chronological order
>>>> almost. So if Martyn had 10 commits in series I had squashed all of
>>>> them, since they were small comments anyways. The good thing about
>>>> this is that nobody would lose authorship of these commits.
>>>>
>>>> We will need to come up with more meaningful messages for these
>>>> commits before we can merge into master. But this is getting into a
>>>> very good shape. I'm impressed by the amount of work I see done on
>>>> this branch. Very well done guys! I mean it!
>>>>
>>>> Also, I have saved the old branch before I pushed -f into my fork as
>>>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>>>> everything and let me know if I did.
>>>>
>>>>
>>>> And please rebase more often on this branch unless you merge it soon.
>>>>
>>>>
>>>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>>>> <cl...@gmail.com> wrote:
>>>>> If / when we do the 2.0 bump, I would like to move a few classes.
>>>>> Mainly under server.impl... I would like to move activations under a
>>>>> package for activation, replicationendpoints for a package for
>>>>> replications...    some small stuff like that just to reorganize
>>>>> little things like this a bit.
>>>>>
>>>>> We can't do that now as that would break API and compatibility, but if
>>>>> we do the bump, I would like to make that simple move.
>>>>>
>>>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
>>>>> wrote:
>>>>>> Hi Matt,
>>>>>>
>>>>>> Comments inline.
>>>>>>
>>>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Martyn-
>>>>>>>
>>>>>>> I think you nailed it here-- well done =)
>>>>>>>
>>>>>>> My notes in-line--
>>>>>>>
>>>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>>>>
>>>>>>>> 1. Ability to route messages to queues with the same address, but
>>>>>>>> different
>>>>>>>> routing semantics.
>>>>>>>>
>>>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
>>>>>>>> introduces
>>>>>>>> an address object at the configuration and management layer. In the
>>>>>>>> proposal it is not possible to create 2 addresses with different
>>>>>>>> routing
>>>>>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>>>>>> compatability with other vendors).
>>>>>>>>
>>>>>>>> Potential Modification: Addresses can have multiple routing type
>>>>>>>> \u201cendpoints\u201d, either \u201cmulticast\u201d only, \u201canycast\u201d only or both. The
>>>>>>>> example
>>>>>>>> below would be used to represent a JMS Topic called \u201cfoo\u201d, with a
>>>>>>>> single
>>>>>>>> subscription queue and a JMS Queue called \u201cfoo\u201d. N.B. The actual XML
>>>>>>>> is
>>>>>>>> just an example, there are multiple ways this could be represented
>>>>>>>> that we
>>>>>>>> can define later.
>>>>>>>>
>>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>>> <*queues*>            <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>>>>>          </*anycast*>      <*mulcast*>         <*queues*>
>>>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>>
>>>>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>>>>> auto-creation of destinations across protocols. Having this show up in
>>>>>>> the
>>>>>>> configs would give developers and admins more information to
>>>>>>> troubleshoot
>>>>>>> the mixed address type+protocol scenario.
>>>>>>>
>>>>>>> 2. Sending to \u201cmulticast\u201d, \u201canycast\u201d or \u201call\u201d
>>>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
>>>>>>>> prefixing)
>>>>>>>> allow the producer to identify the type of end point it would like to
>>>>>>>> send
>>>>>>>> to.
>>>>>>>>
>>>>>>>> If a JMS client creates a producer and passes in a topic with address
>>>>>>>> \u201cfoo\u201d. Then only the queues associated with the \u201cmulticast\u201d section of
>>>>>>>> the
>>>>>>>> address. A similar thing happens when the JMS producer sends to a
>>>>>>>> \u201cqueue\u201d
>>>>>>>> messages should be distributed amongst the queues associated with the
>>>>>>>> \u201canycast\u201d section of the address.
>>>>>>>>
>>>>>>>> There may also be a case when a producer does not identify the
>>>>>>>> endpoint
>>>>>>>> type, and simply sends to \u201cfoo\u201d. AMQP or MQTT may want to do this. In
>>>>>>>> this
>>>>>>>> scenario both should happen. All the queues under the multicast
>>>>>>>> section
>>>>>>>> get
>>>>>>>> a copy of the message, and one queue under the anycast section gets
>>>>>>>> the
>>>>>>>> message.
>>>>>>>>
>>>>>>>> Modification: None Needed. Internal APIs would need to be updated to
>>>>>>>> allow
>>>>>>>> this functionality.
>>>>>>>>
>>>>>>> I think the "deliver to all" scenario should be fine. This seems
>>>>>>> analogous
>>>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
>>>>>>> scenarios
>>>>>>> and report back any gotchas.
>>>>>>>
>>>>>>> 3. Support for prefixes to identify endpoint types
>>>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>>>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>>>>>> prefix notation.
>>>>>>>>
>>>>>>>> e.g. queue:///foo
>>>>>>>>
>>>>>>>> Which would identify:
>>>>>>>>
>>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>>>>>
>>>>>>>> Modifications Needed: None to the model. An additional parameter to
>>>>>>>> the
>>>>>>>> acceptors should be added to identify the prefix.
>>>>>>>>
>>>>>>> Just as a check point in the syntax+naming convention in your provided
>>>>>>> example... would the name actually be:
>>>>>>>
>>>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>>>>
>>>>>> The queue name can be anything.  It's the address that is used by
>>>>>> consumer/producer.  The protocol handler / broker will decided which
>>>>>> queue
>>>>>> to connect to.
>>>>>>
>>>>>>> 4. Multiple endpoints are defined, but client does not specify
>>>>>>> \u201cendpoint
>>>>>>>> routing type\u201d when consuming
>>>>>>>>
>>>>>>>> Handling cases where consumers does not pass enough information in
>>>>>>>> their
>>>>>>>> address or via protocol specific mechanisms to identify an endpoint.
>>>>>>>> Let\u2019s
>>>>>>>> say an AMQP client, requests to subscribe to the address \u201cfoo\u201d, but
>>>>>>>> passes
>>>>>>>> no extra information. In the cases where there are only a single
>>>>>>>> endpoint
>>>>>>>> type defined, the consumer would associated with that endpoint type.
>>>>>>>> However, when both endpoint types are defined, the protocol handler
>>>>>>>> does
>>>>>>>> not know whether to associate this consumer with a queue under the
>>>>>>>> \u201canycast\u201d section, or whether to create a new queue under the
>>>>>>>> \u201cmulticast\u201d
>>>>>>>> section. e.g.
>>>>>>>>
>>>>>>>> Consume: \u201cfoo\u201d
>>>>>>>>
>>>>>>>> <*addresses*>
>>>>>>>>
>>>>>>>>        <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>>>>>           <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>>>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>>
>>>>>>>> In this scenario, we can make the default configurable on the
>>>>>>>> protocol/acceptor. Possible options for this could be:
>>>>>>>>
>>>>>>>> \u201cmulticast\u201d: Defaults multicast
>>>>>>>>
>>>>>>>> \u201canycast\u201d: Defaults to anycast
>>>>>>>>
>>>>>>>> \u201cerror\u201d: Returns an error to the client
>>>>>>>>
>>>>>>>> Alternatively each protocol handler could handle this in the most
>>>>>>>> sensible
>>>>>>>> way for that protocol. MQTT might default to \u201cmulticast\u201d, STOMP
>>>>>>>> \u201canycast\u201d,
>>>>>>>> and AMQP to \u201cerror\u201d.
>>>>>>>>
>>>>>>> Yep, this works great. I think there are two flags on the acceptors..
>>>>>>> one
>>>>>>> for auto-create and one for default handling of name collision. The
>>>>>>> defaults would most likely be the same.
>>>>>>>
>>>>>>> Something along the lines of:
>>>>>>> auto-create-default = "multicast | anycast"
>>>>>>> no-prefix-default = "multicast | anycast | error"
>>>>>>>
>>>>>>> 5. Fully qualified address names
>>>>>>>> This feature allows a client to identify a particular address on a
>>>>>>>> specific
>>>>>>>> broker in a cluster. This could be achieved by the client using some
>>>>>>>> form
>>>>>>>> of address as:
>>>>>>>>
>>>>>>>> queue:///host/broker/address/
>>>>>>>>
>>>>>>>> Matt could you elaborate on the drivers behind this requirement
>>>>>>>> please.
>>>>>>>>
>>>>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>>>>>> current model will support this address syntax if we want to use it in
>>>>>>>> the
>>>>>>>> future.
>>>>>>>>
>>>>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>>>>> recommendation is to consider it in addressing now, so we can hopefully
>>>>>>> avoid any breakage down the road.
>>>>>>>
>>>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is
>>>>>>> the
>>>>>>> ability to fully address a destination using a format similar to this:
>>>>>>>
>>>>>>> queue://brokerB/myQueue
>>>>>>>
>>>>>> The advantage of this is to allow for scaling of the number of
>>>>>> destinations
>>>>>>> and allows for more dynamic broker networks to be created without
>>>>>>> applications having to have connection information for all brokers in a
>>>>>>> broker network. Think simple delivery+routing, and not horizontal
>>>>>>> scaling.
>>>>>>> It is very analogous to SMTP mail routing.
>>>>>>>
>>>>>>> Producer behavior:
>>>>>>>
>>>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>>>>> queue://brokerB/myQueue
>>>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>>>>> acknowledgement and persistence accordingly
>>>>>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>>>>>> All messages for brokerB are generally stored in one queue-- this is
>>>>>>> how it
>>>>>>> helps with destination scaling
>>>>>>>
>>>>>>> Broker to broker behavior:
>>>>>>>
>>>>>>> There are generally two scenarios: always-on or periodic-check
>>>>>>>
>>>>>>> In "always-on"
>>>>>>> 1. brokerA looks for a brokerB in its list of cluster connections and
>>>>>>> then
>>>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>>>>> messages, depending on cluster connection config)
>>>>>>>
>>>>>>> In "periodic-check"
>>>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>>>>>> and then receives any messages that have arrived since last check
>>>>>>>
>>>>>>> TL;DR;
>>>>>>>
>>>>>>> It would be cool to consider remote broker delivery for messages while
>>>>>>> refactoring the address handling code. This would bring Artemis inline
>>>>>>> with
>>>>>>> the rest of the commercial EMS brokers. The impact now, hopefully, is
>>>>>>> minor
>>>>>>> and just thinking about default prefixes.
>>>>>>>
>>>>>> Understood, from our conversations on IRC I can see why this might be
>>>>>> useful.
>>>>>>
>>>>>>> Thanks,
>>>>>>> -Matt
>>>>>>>
>>>>>>>
>>>>>>>
>>>>> --
>>>>> Clebert Suconic
>>>>
>>> --
>>> Tim Bish
>>> twitter: @tabish121
>>> blog: http://timbish.blogspot.com/
>>>
>>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
Justin and Martyn have cleaned up the branch, and everything is ready
to be pushed upstream...


so, what I will be doing now:


i - push the cleaned branch as master upstream. this will include a
commit to bump the poms to 2.0.0-SNAPSHOT
ii - I will remove the temporary branch.
iii - I will fork the current tip of master as 1.x




And... BTW: everybody need to check on this.. this is a massive amount
of work.. Awesome work guys... It is really nice to see this coming in
shape this way.




On Wed, Dec 7, 2016 at 4:11 PM, Christopher Shannon
<ch...@gmail.com> wrote:
> +1 for merging the branch into master after the cleanup is done and bumping
> to 2.0 since it is a major architecture change.
>
>
> On Wed, Dec 7, 2016 at 3:31 PM, Justin Bertram <jb...@apache.com> wrote:
>
>> Essential feature parity with 5.x (where it makes sense) has been a goal
>> all along, but I think waiting until such parity exists before the next
>> major release means the community will be waiting quite a bit longer than
>> they already have.  Meanwhile, new functionality that could benefit the
>> community will remain unavailable.  In any event, "feature parity" is a bit
>> vague.  If there is something specific with regards to 5.x parity that
>> you're looking for then I think you should make that explicit so it can be
>> evaluated.
>>
>> I'm in favor of merging the addressing changes onto master, hardening
>> things up a bit, and then releasing.
>>
>>
>> Justin
>>
>> ----- Original Message -----
>> From: "Matt Pavlovich" <ma...@gmail.com>
>> To: dev@activemq.apache.org
>> Sent: Wednesday, December 7, 2016 2:04:13 PM
>> Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component
>> removal and potential 2.0.0
>>
>> IMHO, I think it would be good to kick up a thread on what it means to
>> be 2.0. It sounds like the addressing changes definitely warrant it on
>> its own, but I'm thinking having ActiveMQ 5.x feature parity would be a
>> good goal for the 2.0 release.  My $0.02
>>
>> On 12/7/16 2:56 PM, Clebert Suconic wrote:
>> > +1000
>> >
>> >
>> > It needs one final cleanup before it can be done though.. these commit
>> > messages need meaninful descriptions.
>> >
>> > if Justin or Martyn could come up with that since they did most of the
>> > work on the branch.
>> >
>> > This will really require bumping the release to 2.0.0 (there's a
>> > 2.0.snapshot commit on it already).  I would merge this into master,
>> > and fork the current master as 1.x.
>> >
>> >
>> >
>> >
>> > On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com>
>> wrote:
>> >> This would be a good time to move to master, would allow other to more
>> >> easily get onboard
>> >>
>> >>
>> >> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>> >>> I have rebased ARTEMIS-780 in top of master. There was a lot of
>> >>> conflicts...
>> >>>
>> >>> I have aggregated/squashed most of the commits by chronological order
>> >>> almost. So if Martyn had 10 commits in series I had squashed all of
>> >>> them, since they were small comments anyways. The good thing about
>> >>> this is that nobody would lose authorship of these commits.
>> >>>
>> >>> We will need to come up with more meaningful messages for these
>> >>> commits before we can merge into master. But this is getting into a
>> >>> very good shape. I'm impressed by the amount of work I see done on
>> >>> this branch. Very well done guys! I mean it!
>> >>>
>> >>> Also, I have saved the old branch before I pushed -f into my fork as
>> >>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>> >>> everything and let me know if I did.
>> >>>
>> >>>
>> >>> And please rebase more often on this branch unless you merge it soon.
>> >>>
>> >>>
>> >>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>> >>> <cl...@gmail.com> wrote:
>> >>>> If / when we do the 2.0 bump, I would like to move a few classes.
>> >>>> Mainly under server.impl... I would like to move activations under a
>> >>>> package for activation, replicationendpoints for a package for
>> >>>> replications...    some small stuff like that just to reorganize
>> >>>> little things like this a bit.
>> >>>>
>> >>>> We can't do that now as that would break API and compatibility, but if
>> >>>> we do the bump, I would like to make that simple move.
>> >>>>
>> >>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
>> >>>> wrote:
>> >>>>> Hi Matt,
>> >>>>>
>> >>>>> Comments inline.
>> >>>>>
>> >>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
>> >>>>> wrote:
>> >>>>>
>> >>>>>> Martyn-
>> >>>>>>
>> >>>>>> I think you nailed it here-- well done =)
>> >>>>>>
>> >>>>>> My notes in-line--
>> >>>>>>
>> >>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>> >>>>>>
>> >>>>>>> 1. Ability to route messages to queues with the same address, but
>> >>>>>>> different
>> >>>>>>> routing semantics.
>> >>>>>>>
>> >>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
>> >>>>>>> introduces
>> >>>>>>> an address object at the configuration and management layer. In the
>> >>>>>>> proposal it is not possible to create 2 addresses with different
>> >>>>>>> routing
>> >>>>>>> types. This causes a problem with existing clients (JMS, STOMP and
>> for
>> >>>>>>> compatability with other vendors).
>> >>>>>>>
>> >>>>>>> Potential Modification: Addresses can have multiple routing type
>> >>>>>>> “endpoints”, either “multicast” only, “anycast” only or both. The
>> >>>>>>> example
>> >>>>>>> below would be used to represent a JMS Topic called “foo”, with a
>> >>>>>>> single
>> >>>>>>> subscription queue and a JMS Queue called “foo”. N.B. The actual
>> XML
>> >>>>>>> is
>> >>>>>>> just an example, there are multiple ways this could be represented
>> >>>>>>> that we
>> >>>>>>> can define later.
>> >>>>>>>
>> >>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>> >>>>>>> <*queues*>            <*queue* *name**="**foo”* />
>>  </*queues*>
>> >>>>>>>         </*anycast*>      <*mulcast*>         <*queues*>
>> >>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>> >>>>>>> </*multicast*>   </*address*></*addresses*>
>> >>>>>>>
>> >>>>>> I think this solves it. The crux of the issues (for me) boils down
>> to
>> >>>>>> auto-creation of destinations across protocols. Having this show up
>> in
>> >>>>>> the
>> >>>>>> configs would give developers and admins more information to
>> >>>>>> troubleshoot
>> >>>>>> the mixed address type+protocol scenario.
>> >>>>>>
>> >>>>>> 2. Sending to “multicast”, “anycast” or “all”
>> >>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
>> >>>>>>> prefixing)
>> >>>>>>> allow the producer to identify the type of end point it would like
>> to
>> >>>>>>> send
>> >>>>>>> to.
>> >>>>>>>
>> >>>>>>> If a JMS client creates a producer and passes in a topic with
>> address
>> >>>>>>> “foo”. Then only the queues associated with the “multicast”
>> section of
>> >>>>>>> the
>> >>>>>>> address. A similar thing happens when the JMS producer sends to a
>> >>>>>>> “queue”
>> >>>>>>> messages should be distributed amongst the queues associated with
>> the
>> >>>>>>> “anycast” section of the address.
>> >>>>>>>
>> >>>>>>> There may also be a case when a producer does not identify the
>> >>>>>>> endpoint
>> >>>>>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this.
>> In
>> >>>>>>> this
>> >>>>>>> scenario both should happen. All the queues under the multicast
>> >>>>>>> section
>> >>>>>>> get
>> >>>>>>> a copy of the message, and one queue under the anycast section gets
>> >>>>>>> the
>> >>>>>>> message.
>> >>>>>>>
>> >>>>>>> Modification: None Needed. Internal APIs would need to be updated
>> to
>> >>>>>>> allow
>> >>>>>>> this functionality.
>> >>>>>>>
>> >>>>>> I think the "deliver to all" scenario should be fine. This seems
>> >>>>>> analogous
>> >>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
>> >>>>>> scenarios
>> >>>>>> and report back any gotchas.
>> >>>>>>
>> >>>>>> 3. Support for prefixes to identify endpoint types
>> >>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from
>> alternate
>> >>>>>>> vendors, identify the endpoint type (in producer and consumer)
>> using a
>> >>>>>>> prefix notation.
>> >>>>>>>
>> >>>>>>> e.g. queue:///foo
>> >>>>>>>
>> >>>>>>> Which would identify:
>> >>>>>>>
>> >>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>> >>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>> >>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>> >>>>>>>
>> >>>>>>> Modifications Needed: None to the model. An additional parameter to
>> >>>>>>> the
>> >>>>>>> acceptors should be added to identify the prefix.
>> >>>>>>>
>> >>>>>> Just as a check point in the syntax+naming convention in your
>> provided
>> >>>>>> example... would the name actually be:
>> >>>>>>
>> >>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>> >>>>>>
>> >>>>> The queue name can be anything.  It's the address that is used by
>> >>>>> consumer/producer.  The protocol handler / broker will decided which
>> >>>>> queue
>> >>>>> to connect to.
>> >>>>>
>> >>>>>> 4. Multiple endpoints are defined, but client does not specify
>> >>>>>> “endpoint
>> >>>>>>> routing type” when consuming
>> >>>>>>>
>> >>>>>>> Handling cases where consumers does not pass enough information in
>> >>>>>>> their
>> >>>>>>> address or via protocol specific mechanisms to identify an
>> endpoint.
>> >>>>>>> Let’s
>> >>>>>>> say an AMQP client, requests to subscribe to the address “foo”, but
>> >>>>>>> passes
>> >>>>>>> no extra information. In the cases where there are only a single
>> >>>>>>> endpoint
>> >>>>>>> type defined, the consumer would associated with that endpoint
>> type.
>> >>>>>>> However, when both endpoint types are defined, the protocol handler
>> >>>>>>> does
>> >>>>>>> not know whether to associate this consumer with a queue under the
>> >>>>>>> “anycast” section, or whether to create a new queue under the
>> >>>>>>> “multicast”
>> >>>>>>> section. e.g.
>> >>>>>>>
>> >>>>>>> Consume: “foo”
>> >>>>>>>
>> >>>>>>> <*addresses*>
>> >>>>>>>
>> >>>>>>>       <*address* *name**="foo"*>      <*anycast*>
>>  <*queues*>
>> >>>>>>>          <*queue* *name**="**foo”* />         </*queues*>
>> >>>>>>> </*anycast*>      <*multicast*>         <*queues*>
>> <*queue*
>> >>>>>>> *name**="my.topic.subscription" */>         </*queues*>
>> >>>>>>> </*multicast*>   </*address*></*addresses*>
>> >>>>>>>
>> >>>>>>> In this scenario, we can make the default configurable on the
>> >>>>>>> protocol/acceptor. Possible options for this could be:
>> >>>>>>>
>> >>>>>>> “multicast”: Defaults multicast
>> >>>>>>>
>> >>>>>>> “anycast”: Defaults to anycast
>> >>>>>>>
>> >>>>>>> “error”: Returns an error to the client
>> >>>>>>>
>> >>>>>>> Alternatively each protocol handler could handle this in the most
>> >>>>>>> sensible
>> >>>>>>> way for that protocol. MQTT might default to “multicast”, STOMP
>> >>>>>>> “anycast”,
>> >>>>>>> and AMQP to “error”.
>> >>>>>>>
>> >>>>>> Yep, this works great. I think there are two flags on the
>> acceptors..
>> >>>>>> one
>> >>>>>> for auto-create and one for default handling of name collision. The
>> >>>>>> defaults would most likely be the same.
>> >>>>>>
>> >>>>>> Something along the lines of:
>> >>>>>> auto-create-default = "multicast | anycast"
>> >>>>>> no-prefix-default = "multicast | anycast | error"
>> >>>>>>
>> >>>>>> 5. Fully qualified address names
>> >>>>>>> This feature allows a client to identify a particular address on a
>> >>>>>>> specific
>> >>>>>>> broker in a cluster. This could be achieved by the client using
>> some
>> >>>>>>> form
>> >>>>>>> of address as:
>> >>>>>>>
>> >>>>>>> queue:///host/broker/address/
>> >>>>>>>
>> >>>>>>> Matt could you elaborate on the drivers behind this requirement
>> >>>>>>> please.
>> >>>>>>>
>> >>>>>>> I am of the opinion that this is out of the scope of the addressing
>> >>>>>>> changes, and is more to do with redirecting in cluster scenarios.
>> The
>> >>>>>>> current model will support this address syntax if we want to use
>> it in
>> >>>>>>> the
>> >>>>>>> future.
>> >>>>>>>
>> >>>>>> I agree that tackling the impl of this should be out-of-scope. My
>> >>>>>> recommendation is to consider it in addressing now, so we can
>> hopefully
>> >>>>>> avoid any breakage down the road.
>> >>>>>>
>> >>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc)
>> is
>> >>>>>> the
>> >>>>>> ability to fully address a destination using a format similar to
>> this:
>> >>>>>>
>> >>>>>> queue://brokerB/myQueue
>> >>>>>>
>> >>>>> The advantage of this is to allow for scaling of the number of
>> >>>>> destinations
>> >>>>>> and allows for more dynamic broker networks to be created without
>> >>>>>> applications having to have connection information for all brokers
>> in a
>> >>>>>> broker network. Think simple delivery+routing, and not horizontal
>> >>>>>> scaling.
>> >>>>>> It is very analogous to SMTP mail routing.
>> >>>>>>
>> >>>>>> Producer behavior:
>> >>>>>>
>> >>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>> >>>>>> queue://brokerB/myQueue
>> >>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>> >>>>>> acknowledgement and persistence accordingly
>> >>>>>> 3. brokerA would then store the message in a "queue" for brokerB.
>> Note:
>> >>>>>> All messages for brokerB are generally stored in one queue-- this is
>> >>>>>> how it
>> >>>>>> helps with destination scaling
>> >>>>>>
>> >>>>>> Broker to broker behavior:
>> >>>>>>
>> >>>>>> There are generally two scenarios: always-on or periodic-check
>> >>>>>>
>> >>>>>> In "always-on"
>> >>>>>> 1. brokerA looks for a brokerB in its list of cluster connections
>> and
>> >>>>>> then
>> >>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>> >>>>>> messages, depending on cluster connection config)
>> >>>>>>
>> >>>>>> In "periodic-check"
>> >>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time
>> interval
>> >>>>>> and then receives any messages that have arrived since last check
>> >>>>>>
>> >>>>>> TL;DR;
>> >>>>>>
>> >>>>>> It would be cool to consider remote broker delivery for messages
>> while
>> >>>>>> refactoring the address handling code. This would bring Artemis
>> inline
>> >>>>>> with
>> >>>>>> the rest of the commercial EMS brokers. The impact now, hopefully,
>> is
>> >>>>>> minor
>> >>>>>> and just thinking about default prefixes.
>> >>>>>>
>> >>>>> Understood, from our conversations on IRC I can see why this might be
>> >>>>> useful.
>> >>>>>
>> >>>>>> Thanks,
>> >>>>>> -Matt
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>
>> >>>> --
>> >>>> Clebert Suconic
>> >>>
>> >>>
>> >>
>> >> --
>> >> Tim Bish
>> >> twitter: @tabish121
>> >> blog: http://timbish.blogspot.com/
>> >>
>> >
>> >
>>
>>



-- 
Clebert Suconic

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Christopher Shannon <ch...@gmail.com>.
+1 for merging the branch into master after the cleanup is done and bumping
to 2.0 since it is a major architecture change.


On Wed, Dec 7, 2016 at 3:31 PM, Justin Bertram <jb...@apache.com> wrote:

> Essential feature parity with 5.x (where it makes sense) has been a goal
> all along, but I think waiting until such parity exists before the next
> major release means the community will be waiting quite a bit longer than
> they already have.  Meanwhile, new functionality that could benefit the
> community will remain unavailable.  In any event, "feature parity" is a bit
> vague.  If there is something specific with regards to 5.x parity that
> you're looking for then I think you should make that explicit so it can be
> evaluated.
>
> I'm in favor of merging the addressing changes onto master, hardening
> things up a bit, and then releasing.
>
>
> Justin
>
> ----- Original Message -----
> From: "Matt Pavlovich" <ma...@gmail.com>
> To: dev@activemq.apache.org
> Sent: Wednesday, December 7, 2016 2:04:13 PM
> Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component
> removal and potential 2.0.0
>
> IMHO, I think it would be good to kick up a thread on what it means to
> be 2.0. It sounds like the addressing changes definitely warrant it on
> its own, but I'm thinking having ActiveMQ 5.x feature parity would be a
> good goal for the 2.0 release.  My $0.02
>
> On 12/7/16 2:56 PM, Clebert Suconic wrote:
> > +1000
> >
> >
> > It needs one final cleanup before it can be done though.. these commit
> > messages need meaninful descriptions.
> >
> > if Justin or Martyn could come up with that since they did most of the
> > work on the branch.
> >
> > This will really require bumping the release to 2.0.0 (there's a
> > 2.0.snapshot commit on it already).  I would merge this into master,
> > and fork the current master as 1.x.
> >
> >
> >
> >
> > On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com>
> wrote:
> >> This would be a good time to move to master, would allow other to more
> >> easily get onboard
> >>
> >>
> >> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
> >>> I have rebased ARTEMIS-780 in top of master. There was a lot of
> >>> conflicts...
> >>>
> >>> I have aggregated/squashed most of the commits by chronological order
> >>> almost. So if Martyn had 10 commits in series I had squashed all of
> >>> them, since they were small comments anyways. The good thing about
> >>> this is that nobody would lose authorship of these commits.
> >>>
> >>> We will need to come up with more meaningful messages for these
> >>> commits before we can merge into master. But this is getting into a
> >>> very good shape. I'm impressed by the amount of work I see done on
> >>> this branch. Very well done guys! I mean it!
> >>>
> >>> Also, I have saved the old branch before I pushed -f into my fork as
> >>> old-ARTEMIS-780 in case I broke anything on the process. Please check
> >>> everything and let me know if I did.
> >>>
> >>>
> >>> And please rebase more often on this branch unless you merge it soon.
> >>>
> >>>
> >>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
> >>> <cl...@gmail.com> wrote:
> >>>> If / when we do the 2.0 bump, I would like to move a few classes.
> >>>> Mainly under server.impl... I would like to move activations under a
> >>>> package for activation, replicationendpoints for a package for
> >>>> replications...    some small stuff like that just to reorganize
> >>>> little things like this a bit.
> >>>>
> >>>> We can't do that now as that would break API and compatibility, but if
> >>>> we do the bump, I would like to make that simple move.
> >>>>
> >>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
> >>>> wrote:
> >>>>> Hi Matt,
> >>>>>
> >>>>> Comments inline.
> >>>>>
> >>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> Martyn-
> >>>>>>
> >>>>>> I think you nailed it here-- well done =)
> >>>>>>
> >>>>>> My notes in-line--
> >>>>>>
> >>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
> >>>>>>
> >>>>>>> 1. Ability to route messages to queues with the same address, but
> >>>>>>> different
> >>>>>>> routing semantics.
> >>>>>>>
> >>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
> >>>>>>> introduces
> >>>>>>> an address object at the configuration and management layer. In the
> >>>>>>> proposal it is not possible to create 2 addresses with different
> >>>>>>> routing
> >>>>>>> types. This causes a problem with existing clients (JMS, STOMP and
> for
> >>>>>>> compatability with other vendors).
> >>>>>>>
> >>>>>>> Potential Modification: Addresses can have multiple routing type
> >>>>>>> “endpoints”, either “multicast” only, “anycast” only or both. The
> >>>>>>> example
> >>>>>>> below would be used to represent a JMS Topic called “foo”, with a
> >>>>>>> single
> >>>>>>> subscription queue and a JMS Queue called “foo”. N.B. The actual
> XML
> >>>>>>> is
> >>>>>>> just an example, there are multiple ways this could be represented
> >>>>>>> that we
> >>>>>>> can define later.
> >>>>>>>
> >>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
> >>>>>>> <*queues*>            <*queue* *name**="**foo”* />
>  </*queues*>
> >>>>>>>         </*anycast*>      <*mulcast*>         <*queues*>
> >>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
> >>>>>>> </*multicast*>   </*address*></*addresses*>
> >>>>>>>
> >>>>>> I think this solves it. The crux of the issues (for me) boils down
> to
> >>>>>> auto-creation of destinations across protocols. Having this show up
> in
> >>>>>> the
> >>>>>> configs would give developers and admins more information to
> >>>>>> troubleshoot
> >>>>>> the mixed address type+protocol scenario.
> >>>>>>
> >>>>>> 2. Sending to “multicast”, “anycast” or “all”
> >>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
> >>>>>>> prefixing)
> >>>>>>> allow the producer to identify the type of end point it would like
> to
> >>>>>>> send
> >>>>>>> to.
> >>>>>>>
> >>>>>>> If a JMS client creates a producer and passes in a topic with
> address
> >>>>>>> “foo”. Then only the queues associated with the “multicast”
> section of
> >>>>>>> the
> >>>>>>> address. A similar thing happens when the JMS producer sends to a
> >>>>>>> “queue”
> >>>>>>> messages should be distributed amongst the queues associated with
> the
> >>>>>>> “anycast” section of the address.
> >>>>>>>
> >>>>>>> There may also be a case when a producer does not identify the
> >>>>>>> endpoint
> >>>>>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this.
> In
> >>>>>>> this
> >>>>>>> scenario both should happen. All the queues under the multicast
> >>>>>>> section
> >>>>>>> get
> >>>>>>> a copy of the message, and one queue under the anycast section gets
> >>>>>>> the
> >>>>>>> message.
> >>>>>>>
> >>>>>>> Modification: None Needed. Internal APIs would need to be updated
> to
> >>>>>>> allow
> >>>>>>> this functionality.
> >>>>>>>
> >>>>>> I think the "deliver to all" scenario should be fine. This seems
> >>>>>> analogous
> >>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
> >>>>>> scenarios
> >>>>>> and report back any gotchas.
> >>>>>>
> >>>>>> 3. Support for prefixes to identify endpoint types
> >>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from
> alternate
> >>>>>>> vendors, identify the endpoint type (in producer and consumer)
> using a
> >>>>>>> prefix notation.
> >>>>>>>
> >>>>>>> e.g. queue:///foo
> >>>>>>>
> >>>>>>> Which would identify:
> >>>>>>>
> >>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
> >>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
> >>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
> >>>>>>>
> >>>>>>> Modifications Needed: None to the model. An additional parameter to
> >>>>>>> the
> >>>>>>> acceptors should be added to identify the prefix.
> >>>>>>>
> >>>>>> Just as a check point in the syntax+naming convention in your
> provided
> >>>>>> example... would the name actually be:
> >>>>>>
> >>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
> >>>>>>
> >>>>> The queue name can be anything.  It's the address that is used by
> >>>>> consumer/producer.  The protocol handler / broker will decided which
> >>>>> queue
> >>>>> to connect to.
> >>>>>
> >>>>>> 4. Multiple endpoints are defined, but client does not specify
> >>>>>> “endpoint
> >>>>>>> routing type” when consuming
> >>>>>>>
> >>>>>>> Handling cases where consumers does not pass enough information in
> >>>>>>> their
> >>>>>>> address or via protocol specific mechanisms to identify an
> endpoint.
> >>>>>>> Let’s
> >>>>>>> say an AMQP client, requests to subscribe to the address “foo”, but
> >>>>>>> passes
> >>>>>>> no extra information. In the cases where there are only a single
> >>>>>>> endpoint
> >>>>>>> type defined, the consumer would associated with that endpoint
> type.
> >>>>>>> However, when both endpoint types are defined, the protocol handler
> >>>>>>> does
> >>>>>>> not know whether to associate this consumer with a queue under the
> >>>>>>> “anycast” section, or whether to create a new queue under the
> >>>>>>> “multicast”
> >>>>>>> section. e.g.
> >>>>>>>
> >>>>>>> Consume: “foo”
> >>>>>>>
> >>>>>>> <*addresses*>
> >>>>>>>
> >>>>>>>       <*address* *name**="foo"*>      <*anycast*>
>  <*queues*>
> >>>>>>>          <*queue* *name**="**foo”* />         </*queues*>
> >>>>>>> </*anycast*>      <*multicast*>         <*queues*>
> <*queue*
> >>>>>>> *name**="my.topic.subscription" */>         </*queues*>
> >>>>>>> </*multicast*>   </*address*></*addresses*>
> >>>>>>>
> >>>>>>> In this scenario, we can make the default configurable on the
> >>>>>>> protocol/acceptor. Possible options for this could be:
> >>>>>>>
> >>>>>>> “multicast”: Defaults multicast
> >>>>>>>
> >>>>>>> “anycast”: Defaults to anycast
> >>>>>>>
> >>>>>>> “error”: Returns an error to the client
> >>>>>>>
> >>>>>>> Alternatively each protocol handler could handle this in the most
> >>>>>>> sensible
> >>>>>>> way for that protocol. MQTT might default to “multicast”, STOMP
> >>>>>>> “anycast”,
> >>>>>>> and AMQP to “error”.
> >>>>>>>
> >>>>>> Yep, this works great. I think there are two flags on the
> acceptors..
> >>>>>> one
> >>>>>> for auto-create and one for default handling of name collision. The
> >>>>>> defaults would most likely be the same.
> >>>>>>
> >>>>>> Something along the lines of:
> >>>>>> auto-create-default = "multicast | anycast"
> >>>>>> no-prefix-default = "multicast | anycast | error"
> >>>>>>
> >>>>>> 5. Fully qualified address names
> >>>>>>> This feature allows a client to identify a particular address on a
> >>>>>>> specific
> >>>>>>> broker in a cluster. This could be achieved by the client using
> some
> >>>>>>> form
> >>>>>>> of address as:
> >>>>>>>
> >>>>>>> queue:///host/broker/address/
> >>>>>>>
> >>>>>>> Matt could you elaborate on the drivers behind this requirement
> >>>>>>> please.
> >>>>>>>
> >>>>>>> I am of the opinion that this is out of the scope of the addressing
> >>>>>>> changes, and is more to do with redirecting in cluster scenarios.
> The
> >>>>>>> current model will support this address syntax if we want to use
> it in
> >>>>>>> the
> >>>>>>> future.
> >>>>>>>
> >>>>>> I agree that tackling the impl of this should be out-of-scope. My
> >>>>>> recommendation is to consider it in addressing now, so we can
> hopefully
> >>>>>> avoid any breakage down the road.
> >>>>>>
> >>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc)
> is
> >>>>>> the
> >>>>>> ability to fully address a destination using a format similar to
> this:
> >>>>>>
> >>>>>> queue://brokerB/myQueue
> >>>>>>
> >>>>> The advantage of this is to allow for scaling of the number of
> >>>>> destinations
> >>>>>> and allows for more dynamic broker networks to be created without
> >>>>>> applications having to have connection information for all brokers
> in a
> >>>>>> broker network. Think simple delivery+routing, and not horizontal
> >>>>>> scaling.
> >>>>>> It is very analogous to SMTP mail routing.
> >>>>>>
> >>>>>> Producer behavior:
> >>>>>>
> >>>>>> 1. Client X connects to brokerA and sends it a message addressed:
> >>>>>> queue://brokerB/myQueue
> >>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
> >>>>>> acknowledgement and persistence accordingly
> >>>>>> 3. brokerA would then store the message in a "queue" for brokerB.
> Note:
> >>>>>> All messages for brokerB are generally stored in one queue-- this is
> >>>>>> how it
> >>>>>> helps with destination scaling
> >>>>>>
> >>>>>> Broker to broker behavior:
> >>>>>>
> >>>>>> There are generally two scenarios: always-on or periodic-check
> >>>>>>
> >>>>>> In "always-on"
> >>>>>> 1. brokerA looks for a brokerB in its list of cluster connections
> and
> >>>>>> then
> >>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
> >>>>>> messages, depending on cluster connection config)
> >>>>>>
> >>>>>> In "periodic-check"
> >>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time
> interval
> >>>>>> and then receives any messages that have arrived since last check
> >>>>>>
> >>>>>> TL;DR;
> >>>>>>
> >>>>>> It would be cool to consider remote broker delivery for messages
> while
> >>>>>> refactoring the address handling code. This would bring Artemis
> inline
> >>>>>> with
> >>>>>> the rest of the commercial EMS brokers. The impact now, hopefully,
> is
> >>>>>> minor
> >>>>>> and just thinking about default prefixes.
> >>>>>>
> >>>>> Understood, from our conversations on IRC I can see why this might be
> >>>>> useful.
> >>>>>
> >>>>>> Thanks,
> >>>>>> -Matt
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>> --
> >>>> Clebert Suconic
> >>>
> >>>
> >>
> >> --
> >> Tim Bish
> >> twitter: @tabish121
> >> blog: http://timbish.blogspot.com/
> >>
> >
> >
>
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Justin Bertram <jb...@apache.com>.
Essential feature parity with 5.x (where it makes sense) has been a goal all along, but I think waiting until such parity exists before the next major release means the community will be waiting quite a bit longer than they already have.  Meanwhile, new functionality that could benefit the community will remain unavailable.  In any event, "feature parity" is a bit vague.  If there is something specific with regards to 5.x parity that you're looking for then I think you should make that explicit so it can be evaluated.

I'm in favor of merging the addressing changes onto master, hardening things up a bit, and then releasing.


Justin

----- Original Message -----
From: "Matt Pavlovich" <ma...@gmail.com>
To: dev@activemq.apache.org
Sent: Wednesday, December 7, 2016 2:04:13 PM
Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

IMHO, I think it would be good to kick up a thread on what it means to 
be 2.0. It sounds like the addressing changes definitely warrant it on 
its own, but I'm thinking having ActiveMQ 5.x feature parity would be a 
good goal for the 2.0 release.  My $0.02

On 12/7/16 2:56 PM, Clebert Suconic wrote:
> +1000
>
>
> It needs one final cleanup before it can be done though.. these commit
> messages need meaninful descriptions.
>
> if Justin or Martyn could come up with that since they did most of the
> work on the branch.
>
> This will really require bumping the release to 2.0.0 (there's a
> 2.0.snapshot commit on it already).  I would merge this into master,
> and fork the current master as 1.x.
>
>
>
>
> On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com> wrote:
>> This would be a good time to move to master, would allow other to more
>> easily get onboard
>>
>>
>> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>>> I have rebased ARTEMIS-780 in top of master. There was a lot of
>>> conflicts...
>>>
>>> I have aggregated/squashed most of the commits by chronological order
>>> almost. So if Martyn had 10 commits in series I had squashed all of
>>> them, since they were small comments anyways. The good thing about
>>> this is that nobody would lose authorship of these commits.
>>>
>>> We will need to come up with more meaningful messages for these
>>> commits before we can merge into master. But this is getting into a
>>> very good shape. I'm impressed by the amount of work I see done on
>>> this branch. Very well done guys! I mean it!
>>>
>>> Also, I have saved the old branch before I pushed -f into my fork as
>>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>>> everything and let me know if I did.
>>>
>>>
>>> And please rebase more often on this branch unless you merge it soon.
>>>
>>>
>>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>>> <cl...@gmail.com> wrote:
>>>> If / when we do the 2.0 bump, I would like to move a few classes.
>>>> Mainly under server.impl... I would like to move activations under a
>>>> package for activation, replicationendpoints for a package for
>>>> replications...    some small stuff like that just to reorganize
>>>> little things like this a bit.
>>>>
>>>> We can't do that now as that would break API and compatibility, but if
>>>> we do the bump, I would like to make that simple move.
>>>>
>>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
>>>> wrote:
>>>>> Hi Matt,
>>>>>
>>>>> Comments inline.
>>>>>
>>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Martyn-
>>>>>>
>>>>>> I think you nailed it here-- well done =)
>>>>>>
>>>>>> My notes in-line--
>>>>>>
>>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>>>
>>>>>>> 1. Ability to route messages to queues with the same address, but
>>>>>>> different
>>>>>>> routing semantics.
>>>>>>>
>>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
>>>>>>> introduces
>>>>>>> an address object at the configuration and management layer. In the
>>>>>>> proposal it is not possible to create 2 addresses with different
>>>>>>> routing
>>>>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>>>>> compatability with other vendors).
>>>>>>>
>>>>>>> Potential Modification: Addresses can have multiple routing type
>>>>>>> “endpoints”, either “multicast” only, “anycast” only or both. The
>>>>>>> example
>>>>>>> below would be used to represent a JMS Topic called “foo”, with a
>>>>>>> single
>>>>>>> subscription queue and a JMS Queue called “foo”. N.B. The actual XML
>>>>>>> is
>>>>>>> just an example, there are multiple ways this could be represented
>>>>>>> that we
>>>>>>> can define later.
>>>>>>>
>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>> <*queues*>            <*queue* *name**="**foo”* />         </*queues*>
>>>>>>>         </*anycast*>      <*mulcast*>         <*queues*>
>>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>
>>>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>>>> auto-creation of destinations across protocols. Having this show up in
>>>>>> the
>>>>>> configs would give developers and admins more information to
>>>>>> troubleshoot
>>>>>> the mixed address type+protocol scenario.
>>>>>>
>>>>>> 2. Sending to “multicast”, “anycast” or “all”
>>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
>>>>>>> prefixing)
>>>>>>> allow the producer to identify the type of end point it would like to
>>>>>>> send
>>>>>>> to.
>>>>>>>
>>>>>>> If a JMS client creates a producer and passes in a topic with address
>>>>>>> “foo”. Then only the queues associated with the “multicast” section of
>>>>>>> the
>>>>>>> address. A similar thing happens when the JMS producer sends to a
>>>>>>> “queue”
>>>>>>> messages should be distributed amongst the queues associated with the
>>>>>>> “anycast” section of the address.
>>>>>>>
>>>>>>> There may also be a case when a producer does not identify the
>>>>>>> endpoint
>>>>>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this. In
>>>>>>> this
>>>>>>> scenario both should happen. All the queues under the multicast
>>>>>>> section
>>>>>>> get
>>>>>>> a copy of the message, and one queue under the anycast section gets
>>>>>>> the
>>>>>>> message.
>>>>>>>
>>>>>>> Modification: None Needed. Internal APIs would need to be updated to
>>>>>>> allow
>>>>>>> this functionality.
>>>>>>>
>>>>>> I think the "deliver to all" scenario should be fine. This seems
>>>>>> analogous
>>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
>>>>>> scenarios
>>>>>> and report back any gotchas.
>>>>>>
>>>>>> 3. Support for prefixes to identify endpoint types
>>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>>>>> prefix notation.
>>>>>>>
>>>>>>> e.g. queue:///foo
>>>>>>>
>>>>>>> Which would identify:
>>>>>>>
>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>>>>
>>>>>>> Modifications Needed: None to the model. An additional parameter to
>>>>>>> the
>>>>>>> acceptors should be added to identify the prefix.
>>>>>>>
>>>>>> Just as a check point in the syntax+naming convention in your provided
>>>>>> example... would the name actually be:
>>>>>>
>>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>>>
>>>>> The queue name can be anything.  It's the address that is used by
>>>>> consumer/producer.  The protocol handler / broker will decided which
>>>>> queue
>>>>> to connect to.
>>>>>
>>>>>> 4. Multiple endpoints are defined, but client does not specify
>>>>>> “endpoint
>>>>>>> routing type” when consuming
>>>>>>>
>>>>>>> Handling cases where consumers does not pass enough information in
>>>>>>> their
>>>>>>> address or via protocol specific mechanisms to identify an endpoint.
>>>>>>> Let’s
>>>>>>> say an AMQP client, requests to subscribe to the address “foo”, but
>>>>>>> passes
>>>>>>> no extra information. In the cases where there are only a single
>>>>>>> endpoint
>>>>>>> type defined, the consumer would associated with that endpoint type.
>>>>>>> However, when both endpoint types are defined, the protocol handler
>>>>>>> does
>>>>>>> not know whether to associate this consumer with a queue under the
>>>>>>> “anycast” section, or whether to create a new queue under the
>>>>>>> “multicast”
>>>>>>> section. e.g.
>>>>>>>
>>>>>>> Consume: “foo”
>>>>>>>
>>>>>>> <*addresses*>
>>>>>>>
>>>>>>>       <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>>>>          <*queue* *name**="**foo”* />         </*queues*>
>>>>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>
>>>>>>> In this scenario, we can make the default configurable on the
>>>>>>> protocol/acceptor. Possible options for this could be:
>>>>>>>
>>>>>>> “multicast”: Defaults multicast
>>>>>>>
>>>>>>> “anycast”: Defaults to anycast
>>>>>>>
>>>>>>> “error”: Returns an error to the client
>>>>>>>
>>>>>>> Alternatively each protocol handler could handle this in the most
>>>>>>> sensible
>>>>>>> way for that protocol. MQTT might default to “multicast”, STOMP
>>>>>>> “anycast”,
>>>>>>> and AMQP to “error”.
>>>>>>>
>>>>>> Yep, this works great. I think there are two flags on the acceptors..
>>>>>> one
>>>>>> for auto-create and one for default handling of name collision. The
>>>>>> defaults would most likely be the same.
>>>>>>
>>>>>> Something along the lines of:
>>>>>> auto-create-default = "multicast | anycast"
>>>>>> no-prefix-default = "multicast | anycast | error"
>>>>>>
>>>>>> 5. Fully qualified address names
>>>>>>> This feature allows a client to identify a particular address on a
>>>>>>> specific
>>>>>>> broker in a cluster. This could be achieved by the client using some
>>>>>>> form
>>>>>>> of address as:
>>>>>>>
>>>>>>> queue:///host/broker/address/
>>>>>>>
>>>>>>> Matt could you elaborate on the drivers behind this requirement
>>>>>>> please.
>>>>>>>
>>>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>>>>> current model will support this address syntax if we want to use it in
>>>>>>> the
>>>>>>> future.
>>>>>>>
>>>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>>>> recommendation is to consider it in addressing now, so we can hopefully
>>>>>> avoid any breakage down the road.
>>>>>>
>>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is
>>>>>> the
>>>>>> ability to fully address a destination using a format similar to this:
>>>>>>
>>>>>> queue://brokerB/myQueue
>>>>>>
>>>>> The advantage of this is to allow for scaling of the number of
>>>>> destinations
>>>>>> and allows for more dynamic broker networks to be created without
>>>>>> applications having to have connection information for all brokers in a
>>>>>> broker network. Think simple delivery+routing, and not horizontal
>>>>>> scaling.
>>>>>> It is very analogous to SMTP mail routing.
>>>>>>
>>>>>> Producer behavior:
>>>>>>
>>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>>>> queue://brokerB/myQueue
>>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>>>> acknowledgement and persistence accordingly
>>>>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>>>>> All messages for brokerB are generally stored in one queue-- this is
>>>>>> how it
>>>>>> helps with destination scaling
>>>>>>
>>>>>> Broker to broker behavior:
>>>>>>
>>>>>> There are generally two scenarios: always-on or periodic-check
>>>>>>
>>>>>> In "always-on"
>>>>>> 1. brokerA looks for a brokerB in its list of cluster connections and
>>>>>> then
>>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>>>> messages, depending on cluster connection config)
>>>>>>
>>>>>> In "periodic-check"
>>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>>>>> and then receives any messages that have arrived since last check
>>>>>>
>>>>>> TL;DR;
>>>>>>
>>>>>> It would be cool to consider remote broker delivery for messages while
>>>>>> refactoring the address handling code. This would bring Artemis inline
>>>>>> with
>>>>>> the rest of the commercial EMS brokers. The impact now, hopefully, is
>>>>>> minor
>>>>>> and just thinking about default prefixes.
>>>>>>
>>>>> Understood, from our conversations on IRC I can see why this might be
>>>>> useful.
>>>>>
>>>>>> Thanks,
>>>>>> -Matt
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Clebert Suconic
>>>
>>>
>>
>> --
>> Tim Bish
>> twitter: @tabish121
>> blog: http://timbish.blogspot.com/
>>
>
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
+1


On 12/7/16 1:52 PM, Timothy Bish wrote:
> This would be a good time to move to master, would allow other to more 
> easily get onboard
>
> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>> I have rebased ARTEMIS-780 in top of master. There was a lot of 
>> conflicts...
>>
>> I have aggregated/squashed most of the commits by chronological order
>> almost. So if Martyn had 10 commits in series I had squashed all of
>> them, since they were small comments anyways. The good thing about
>> this is that nobody would lose authorship of these commits.
>>
>> We will need to come up with more meaningful messages for these
>> commits before we can merge into master. But this is getting into a
>> very good shape. I'm impressed by the amount of work I see done on
>> this branch. Very well done guys! I mean it!
>>
>> Also, I have saved the old branch before I pushed -f into my fork as
>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>> everything and let me know if I did.
>>
>>
>> And please rebase more often on this branch unless you merge it soon.
>>
>>
>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>> <cl...@gmail.com> wrote:
>>> If / when we do the 2.0 bump, I would like to move a few classes.
>>> Mainly under server.impl... I would like to move activations under a
>>> package for activation, replicationendpoints for a package for
>>> replications...    some small stuff like that just to reorganize
>>> little things like this a bit.
>>>
>>> We can't do that now as that would break API and compatibility, but if
>>> we do the bump, I would like to make that simple move.
>>>
>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com> 
>>> wrote:
>>>> Hi Matt,
>>>>
>>>> Comments inline.
>>>>
>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich 
>>>> <ma...@gmail.com> wrote:
>>>>
>>>>> Martyn-
>>>>>
>>>>> I think you nailed it here-- well done =)
>>>>>
>>>>> My notes in-line--
>>>>>
>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>>
>>>>>> 1. Ability to route messages to queues with the same address, but
>>>>>> different
>>>>>> routing semantics.
>>>>>>
>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that 
>>>>>> introduces
>>>>>> an address object at the configuration and management layer. In the
>>>>>> proposal it is not possible to create 2 addresses with different 
>>>>>> routing
>>>>>> types. This causes a problem with existing clients (JMS, STOMP 
>>>>>> and for
>>>>>> compatability with other vendors).
>>>>>>
>>>>>> Potential Modification: Addresses can have multiple routing type
>>>>>> \u201cendpoints\u201d, either \u201cmulticast\u201d only, \u201canycast\u201d only or both. The 
>>>>>> example
>>>>>> below would be used to represent a JMS Topic called \u201cfoo\u201d, with a 
>>>>>> single
>>>>>> subscription queue and a JMS Queue called \u201cfoo\u201d. N.B. The actual 
>>>>>> XML is
>>>>>> just an example, there are multiple ways this could be 
>>>>>> represented that we
>>>>>> can define later.
>>>>>>
>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>> <*queues*>            <*queue* *name**="**foo\u201d* />         
>>>>>> </*queues*>
>>>>>>        </*anycast*>      <*mulcast*> <*queues*>
>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>>> </*multicast*> </*address*></*addresses*>
>>>>>>
>>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>>> auto-creation of destinations across protocols. Having this show 
>>>>> up in the
>>>>> configs would give developers and admins more information to 
>>>>> troubleshoot
>>>>> the mixed address type+protocol scenario.
>>>>>
>>>>> 2. Sending to \u201cmulticast\u201d, \u201canycast\u201d or \u201call\u201d
>>>>>> As mentioned earlier JMS (and other clients such as STOMP via 
>>>>>> prefixing)
>>>>>> allow the producer to identify the type of end point it would 
>>>>>> like to send
>>>>>> to.
>>>>>>
>>>>>> If a JMS client creates a producer and passes in a topic with 
>>>>>> address
>>>>>> \u201cfoo\u201d. Then only the queues associated with the \u201cmulticast\u201d 
>>>>>> section of the
>>>>>> address. A similar thing happens when the JMS producer sends to a 
>>>>>> \u201cqueue\u201d
>>>>>> messages should be distributed amongst the queues associated with 
>>>>>> the
>>>>>> \u201canycast\u201d section of the address.
>>>>>>
>>>>>> There may also be a case when a producer does not identify the 
>>>>>> endpoint
>>>>>> type, and simply sends to \u201cfoo\u201d. AMQP or MQTT may want to do 
>>>>>> this. In this
>>>>>> scenario both should happen. All the queues under the multicast 
>>>>>> section
>>>>>> get
>>>>>> a copy of the message, and one queue under the anycast section 
>>>>>> gets the
>>>>>> message.
>>>>>>
>>>>>> Modification: None Needed. Internal APIs would need to be updated 
>>>>>> to allow
>>>>>> this functionality.
>>>>>>
>>>>> I think the "deliver to all" scenario should be fine. This seems 
>>>>> analogous
>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some 
>>>>> scenarios
>>>>> and report back any gotchas.
>>>>>
>>>>> 3. Support for prefixes to identify endpoint types
>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from 
>>>>>> alternate
>>>>>> vendors, identify the endpoint type (in producer and consumer) 
>>>>>> using a
>>>>>> prefix notation.
>>>>>>
>>>>>> e.g. queue:///foo
>>>>>>
>>>>>> Which would identify:
>>>>>>
>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>>> </*queues*>      </*anycast*> </*address*></*addresses*>
>>>>>>
>>>>>> Modifications Needed: None to the model. An additional parameter 
>>>>>> to the
>>>>>> acceptors should be added to identify the prefix.
>>>>>>
>>>>> Just as a check point in the syntax+naming convention in your 
>>>>> provided
>>>>> example... would the name actually be:
>>>>>
>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>>
>>>> The queue name can be anything.  It's the address that is used by
>>>> consumer/producer.  The protocol handler / broker will decided 
>>>> which queue
>>>> to connect to.
>>>>
>>>>> 4. Multiple endpoints are defined, but client does not specify 
>>>>> \u201cendpoint
>>>>>> routing type\u201d when consuming
>>>>>>
>>>>>> Handling cases where consumers does not pass enough information 
>>>>>> in their
>>>>>> address or via protocol specific mechanisms to identify an 
>>>>>> endpoint. Let\u2019s
>>>>>> say an AMQP client, requests to subscribe to the address \u201cfoo\u201d, 
>>>>>> but passes
>>>>>> no extra information. In the cases where there are only a single 
>>>>>> endpoint
>>>>>> type defined, the consumer would associated with that endpoint type.
>>>>>> However, when both endpoint types are defined, the protocol 
>>>>>> handler does
>>>>>> not know whether to associate this consumer with a queue under the
>>>>>> \u201canycast\u201d section, or whether to create a new queue under the 
>>>>>> \u201cmulticast\u201d
>>>>>> section. e.g.
>>>>>>
>>>>>> Consume: \u201cfoo\u201d
>>>>>>
>>>>>> <*addresses*>
>>>>>>
>>>>>>      <*address* *name**="foo"*> <*anycast*>         <*queues*>
>>>>>>         <*queue* *name**="**foo\u201d* /> </*queues*>
>>>>>> </*anycast*>      <*multicast*> <*queues*>            <*queue*
>>>>>> *name**="my.topic.subscription" */> </*queues*>
>>>>>> </*multicast*> </*address*></*addresses*>
>>>>>>
>>>>>> In this scenario, we can make the default configurable on the
>>>>>> protocol/acceptor. Possible options for this could be:
>>>>>>
>>>>>> \u201cmulticast\u201d: Defaults multicast
>>>>>>
>>>>>> \u201canycast\u201d: Defaults to anycast
>>>>>>
>>>>>> \u201cerror\u201d: Returns an error to the client
>>>>>>
>>>>>> Alternatively each protocol handler could handle this in the most 
>>>>>> sensible
>>>>>> way for that protocol. MQTT might default to \u201cmulticast\u201d, STOMP 
>>>>>> \u201canycast\u201d,
>>>>>> and AMQP to \u201cerror\u201d.
>>>>>>
>>>>> Yep, this works great. I think there are two flags on the 
>>>>> acceptors.. one
>>>>> for auto-create and one for default handling of name collision. The
>>>>> defaults would most likely be the same.
>>>>>
>>>>> Something along the lines of:
>>>>> auto-create-default = "multicast | anycast"
>>>>> no-prefix-default = "multicast | anycast | error"
>>>>>
>>>>> 5. Fully qualified address names
>>>>>> This feature allows a client to identify a particular address on a
>>>>>> specific
>>>>>> broker in a cluster. This could be achieved by the client using 
>>>>>> some form
>>>>>> of address as:
>>>>>>
>>>>>> queue:///host/broker/address/
>>>>>>
>>>>>> Matt could you elaborate on the drivers behind this requirement 
>>>>>> please.
>>>>>>
>>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>>> changes, and is more to do with redirecting in cluster scenarios. 
>>>>>> The
>>>>>> current model will support this address syntax if we want to use 
>>>>>> it in the
>>>>>> future.
>>>>>>
>>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>>> recommendation is to consider it in addressing now, so we can 
>>>>> hopefully
>>>>> avoid any breakage down the road.
>>>>>
>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, 
>>>>> etc) is the
>>>>> ability to fully address a destination using a format similar to 
>>>>> this:
>>>>>
>>>>> queue://brokerB/myQueue
>>>>>
>>>> The advantage of this is to allow for scaling of the number of 
>>>> destinations
>>>>> and allows for more dynamic broker networks to be created without
>>>>> applications having to have connection information for all brokers 
>>>>> in a
>>>>> broker network. Think simple delivery+routing, and not horizontal 
>>>>> scaling.
>>>>> It is very analogous to SMTP mail routing.
>>>>>
>>>>> Producer behavior:
>>>>>
>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>>> queue://brokerB/myQueue
>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>>> acknowledgement and persistence accordingly
>>>>> 3. brokerA would then store the message in a "queue" for brokerB. 
>>>>> Note:
>>>>> All messages for brokerB are generally stored in one queue-- this 
>>>>> is how it
>>>>> helps with destination scaling
>>>>>
>>>>> Broker to broker behavior:
>>>>>
>>>>> There are generally two scenarios: always-on or periodic-check
>>>>>
>>>>> In "always-on"
>>>>> 1. brokerA looks for a brokerB in its list of cluster connections 
>>>>> and then
>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>>> messages, depending on cluster connection config)
>>>>>
>>>>> In "periodic-check"
>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time 
>>>>> interval
>>>>> and then receives any messages that have arrived since last check
>>>>>
>>>>> TL;DR;
>>>>>
>>>>> It would be cool to consider remote broker delivery for messages 
>>>>> while
>>>>> refactoring the address handling code. This would bring Artemis 
>>>>> inline with
>>>>> the rest of the commercial EMS brokers. The impact now, hopefully, 
>>>>> is minor
>>>>> and just thinking about default prefixes.
>>>>>
>>>> Understood, from our conversations on IRC I can see why this might be
>>>> useful.
>>>>
>>>>> Thanks,
>>>>> -Matt
>>>>>
>>>>>
>>>>>
>>>
>>>
>>> -- 
>>> Clebert Suconic
>>
>>
>
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
IMHO, I think it would be good to kick up a thread on what it means to 
be 2.0. It sounds like the addressing changes definitely warrant it on 
its own, but I'm thinking having ActiveMQ 5.x feature parity would be a 
good goal for the 2.0 release.  My $0.02

On 12/7/16 2:56 PM, Clebert Suconic wrote:
> +1000
>
>
> It needs one final cleanup before it can be done though.. these commit
> messages need meaninful descriptions.
>
> if Justin or Martyn could come up with that since they did most of the
> work on the branch.
>
> This will really require bumping the release to 2.0.0 (there's a
> 2.0.snapshot commit on it already).  I would merge this into master,
> and fork the current master as 1.x.
>
>
>
>
> On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com> wrote:
>> This would be a good time to move to master, would allow other to more
>> easily get onboard
>>
>>
>> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>>> I have rebased ARTEMIS-780 in top of master. There was a lot of
>>> conflicts...
>>>
>>> I have aggregated/squashed most of the commits by chronological order
>>> almost. So if Martyn had 10 commits in series I had squashed all of
>>> them, since they were small comments anyways. The good thing about
>>> this is that nobody would lose authorship of these commits.
>>>
>>> We will need to come up with more meaningful messages for these
>>> commits before we can merge into master. But this is getting into a
>>> very good shape. I'm impressed by the amount of work I see done on
>>> this branch. Very well done guys! I mean it!
>>>
>>> Also, I have saved the old branch before I pushed -f into my fork as
>>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>>> everything and let me know if I did.
>>>
>>>
>>> And please rebase more often on this branch unless you merge it soon.
>>>
>>>
>>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>>> <cl...@gmail.com> wrote:
>>>> If / when we do the 2.0 bump, I would like to move a few classes.
>>>> Mainly under server.impl... I would like to move activations under a
>>>> package for activation, replicationendpoints for a package for
>>>> replications...    some small stuff like that just to reorganize
>>>> little things like this a bit.
>>>>
>>>> We can't do that now as that would break API and compatibility, but if
>>>> we do the bump, I would like to make that simple move.
>>>>
>>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
>>>> wrote:
>>>>> Hi Matt,
>>>>>
>>>>> Comments inline.
>>>>>
>>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Martyn-
>>>>>>
>>>>>> I think you nailed it here-- well done =)
>>>>>>
>>>>>> My notes in-line--
>>>>>>
>>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>>>
>>>>>>> 1. Ability to route messages to queues with the same address, but
>>>>>>> different
>>>>>>> routing semantics.
>>>>>>>
>>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
>>>>>>> introduces
>>>>>>> an address object at the configuration and management layer. In the
>>>>>>> proposal it is not possible to create 2 addresses with different
>>>>>>> routing
>>>>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>>>>> compatability with other vendors).
>>>>>>>
>>>>>>> Potential Modification: Addresses can have multiple routing type
>>>>>>> \u201cendpoints\u201d, either \u201cmulticast\u201d only, \u201canycast\u201d only or both. The
>>>>>>> example
>>>>>>> below would be used to represent a JMS Topic called \u201cfoo\u201d, with a
>>>>>>> single
>>>>>>> subscription queue and a JMS Queue called \u201cfoo\u201d. N.B. The actual XML
>>>>>>> is
>>>>>>> just an example, there are multiple ways this could be represented
>>>>>>> that we
>>>>>>> can define later.
>>>>>>>
>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>> <*queues*>            <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>>>>         </*anycast*>      <*mulcast*>         <*queues*>
>>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>
>>>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>>>> auto-creation of destinations across protocols. Having this show up in
>>>>>> the
>>>>>> configs would give developers and admins more information to
>>>>>> troubleshoot
>>>>>> the mixed address type+protocol scenario.
>>>>>>
>>>>>> 2. Sending to \u201cmulticast\u201d, \u201canycast\u201d or \u201call\u201d
>>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
>>>>>>> prefixing)
>>>>>>> allow the producer to identify the type of end point it would like to
>>>>>>> send
>>>>>>> to.
>>>>>>>
>>>>>>> If a JMS client creates a producer and passes in a topic with address
>>>>>>> \u201cfoo\u201d. Then only the queues associated with the \u201cmulticast\u201d section of
>>>>>>> the
>>>>>>> address. A similar thing happens when the JMS producer sends to a
>>>>>>> \u201cqueue\u201d
>>>>>>> messages should be distributed amongst the queues associated with the
>>>>>>> \u201canycast\u201d section of the address.
>>>>>>>
>>>>>>> There may also be a case when a producer does not identify the
>>>>>>> endpoint
>>>>>>> type, and simply sends to \u201cfoo\u201d. AMQP or MQTT may want to do this. In
>>>>>>> this
>>>>>>> scenario both should happen. All the queues under the multicast
>>>>>>> section
>>>>>>> get
>>>>>>> a copy of the message, and one queue under the anycast section gets
>>>>>>> the
>>>>>>> message.
>>>>>>>
>>>>>>> Modification: None Needed. Internal APIs would need to be updated to
>>>>>>> allow
>>>>>>> this functionality.
>>>>>>>
>>>>>> I think the "deliver to all" scenario should be fine. This seems
>>>>>> analogous
>>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
>>>>>> scenarios
>>>>>> and report back any gotchas.
>>>>>>
>>>>>> 3. Support for prefixes to identify endpoint types
>>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>>>>> prefix notation.
>>>>>>>
>>>>>>> e.g. queue:///foo
>>>>>>>
>>>>>>> Which would identify:
>>>>>>>
>>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>>>>
>>>>>>> Modifications Needed: None to the model. An additional parameter to
>>>>>>> the
>>>>>>> acceptors should be added to identify the prefix.
>>>>>>>
>>>>>> Just as a check point in the syntax+naming convention in your provided
>>>>>> example... would the name actually be:
>>>>>>
>>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>>>
>>>>> The queue name can be anything.  It's the address that is used by
>>>>> consumer/producer.  The protocol handler / broker will decided which
>>>>> queue
>>>>> to connect to.
>>>>>
>>>>>> 4. Multiple endpoints are defined, but client does not specify
>>>>>> \u201cendpoint
>>>>>>> routing type\u201d when consuming
>>>>>>>
>>>>>>> Handling cases where consumers does not pass enough information in
>>>>>>> their
>>>>>>> address or via protocol specific mechanisms to identify an endpoint.
>>>>>>> Let\u2019s
>>>>>>> say an AMQP client, requests to subscribe to the address \u201cfoo\u201d, but
>>>>>>> passes
>>>>>>> no extra information. In the cases where there are only a single
>>>>>>> endpoint
>>>>>>> type defined, the consumer would associated with that endpoint type.
>>>>>>> However, when both endpoint types are defined, the protocol handler
>>>>>>> does
>>>>>>> not know whether to associate this consumer with a queue under the
>>>>>>> \u201canycast\u201d section, or whether to create a new queue under the
>>>>>>> \u201cmulticast\u201d
>>>>>>> section. e.g.
>>>>>>>
>>>>>>> Consume: \u201cfoo\u201d
>>>>>>>
>>>>>>> <*addresses*>
>>>>>>>
>>>>>>>       <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>>>>          <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>>
>>>>>>> In this scenario, we can make the default configurable on the
>>>>>>> protocol/acceptor. Possible options for this could be:
>>>>>>>
>>>>>>> \u201cmulticast\u201d: Defaults multicast
>>>>>>>
>>>>>>> \u201canycast\u201d: Defaults to anycast
>>>>>>>
>>>>>>> \u201cerror\u201d: Returns an error to the client
>>>>>>>
>>>>>>> Alternatively each protocol handler could handle this in the most
>>>>>>> sensible
>>>>>>> way for that protocol. MQTT might default to \u201cmulticast\u201d, STOMP
>>>>>>> \u201canycast\u201d,
>>>>>>> and AMQP to \u201cerror\u201d.
>>>>>>>
>>>>>> Yep, this works great. I think there are two flags on the acceptors..
>>>>>> one
>>>>>> for auto-create and one for default handling of name collision. The
>>>>>> defaults would most likely be the same.
>>>>>>
>>>>>> Something along the lines of:
>>>>>> auto-create-default = "multicast | anycast"
>>>>>> no-prefix-default = "multicast | anycast | error"
>>>>>>
>>>>>> 5. Fully qualified address names
>>>>>>> This feature allows a client to identify a particular address on a
>>>>>>> specific
>>>>>>> broker in a cluster. This could be achieved by the client using some
>>>>>>> form
>>>>>>> of address as:
>>>>>>>
>>>>>>> queue:///host/broker/address/
>>>>>>>
>>>>>>> Matt could you elaborate on the drivers behind this requirement
>>>>>>> please.
>>>>>>>
>>>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>>>>> current model will support this address syntax if we want to use it in
>>>>>>> the
>>>>>>> future.
>>>>>>>
>>>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>>>> recommendation is to consider it in addressing now, so we can hopefully
>>>>>> avoid any breakage down the road.
>>>>>>
>>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is
>>>>>> the
>>>>>> ability to fully address a destination using a format similar to this:
>>>>>>
>>>>>> queue://brokerB/myQueue
>>>>>>
>>>>> The advantage of this is to allow for scaling of the number of
>>>>> destinations
>>>>>> and allows for more dynamic broker networks to be created without
>>>>>> applications having to have connection information for all brokers in a
>>>>>> broker network. Think simple delivery+routing, and not horizontal
>>>>>> scaling.
>>>>>> It is very analogous to SMTP mail routing.
>>>>>>
>>>>>> Producer behavior:
>>>>>>
>>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>>>> queue://brokerB/myQueue
>>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>>>> acknowledgement and persistence accordingly
>>>>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>>>>> All messages for brokerB are generally stored in one queue-- this is
>>>>>> how it
>>>>>> helps with destination scaling
>>>>>>
>>>>>> Broker to broker behavior:
>>>>>>
>>>>>> There are generally two scenarios: always-on or periodic-check
>>>>>>
>>>>>> In "always-on"
>>>>>> 1. brokerA looks for a brokerB in its list of cluster connections and
>>>>>> then
>>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>>>> messages, depending on cluster connection config)
>>>>>>
>>>>>> In "periodic-check"
>>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>>>>> and then receives any messages that have arrived since last check
>>>>>>
>>>>>> TL;DR;
>>>>>>
>>>>>> It would be cool to consider remote broker delivery for messages while
>>>>>> refactoring the address handling code. This would bring Artemis inline
>>>>>> with
>>>>>> the rest of the commercial EMS brokers. The impact now, hopefully, is
>>>>>> minor
>>>>>> and just thinking about default prefixes.
>>>>>>
>>>>> Understood, from our conversations on IRC I can see why this might be
>>>>> useful.
>>>>>
>>>>>> Thanks,
>>>>>> -Matt
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Clebert Suconic
>>>
>>>
>>
>> --
>> Tim Bish
>> twitter: @tabish121
>> blog: http://timbish.blogspot.com/
>>
>
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
+1000


It needs one final cleanup before it can be done though.. these commit
messages need meaninful descriptions.

if Justin or Martyn could come up with that since they did most of the
work on the branch.

This will really require bumping the release to 2.0.0 (there's a
2.0.snapshot commit on it already).  I would merge this into master,
and fork the current master as 1.x.




On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <ta...@gmail.com> wrote:
> This would be a good time to move to master, would allow other to more
> easily get onboard
>
>
> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
>>
>> I have rebased ARTEMIS-780 in top of master. There was a lot of
>> conflicts...
>>
>> I have aggregated/squashed most of the commits by chronological order
>> almost. So if Martyn had 10 commits in series I had squashed all of
>> them, since they were small comments anyways. The good thing about
>> this is that nobody would lose authorship of these commits.
>>
>> We will need to come up with more meaningful messages for these
>> commits before we can merge into master. But this is getting into a
>> very good shape. I'm impressed by the amount of work I see done on
>> this branch. Very well done guys! I mean it!
>>
>> Also, I have saved the old branch before I pushed -f into my fork as
>> old-ARTEMIS-780 in case I broke anything on the process. Please check
>> everything and let me know if I did.
>>
>>
>> And please rebase more often on this branch unless you merge it soon.
>>
>>
>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
>> <cl...@gmail.com> wrote:
>>>
>>> If / when we do the 2.0 bump, I would like to move a few classes.
>>> Mainly under server.impl... I would like to move activations under a
>>> package for activation, replicationendpoints for a package for
>>> replications...    some small stuff like that just to reorganize
>>> little things like this a bit.
>>>
>>> We can't do that now as that would break API and compatibility, but if
>>> we do the bump, I would like to make that simple move.
>>>
>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com>
>>> wrote:
>>>>
>>>> Hi Matt,
>>>>
>>>> Comments inline.
>>>>
>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Martyn-
>>>>>
>>>>> I think you nailed it here-- well done =)
>>>>>
>>>>> My notes in-line--
>>>>>
>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>>
>>>>>> 1. Ability to route messages to queues with the same address, but
>>>>>> different
>>>>>> routing semantics.
>>>>>>
>>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that
>>>>>> introduces
>>>>>> an address object at the configuration and management layer. In the
>>>>>> proposal it is not possible to create 2 addresses with different
>>>>>> routing
>>>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>>>> compatability with other vendors).
>>>>>>
>>>>>> Potential Modification: Addresses can have multiple routing type
>>>>>> “endpoints”, either “multicast” only, “anycast” only or both. The
>>>>>> example
>>>>>> below would be used to represent a JMS Topic called “foo”, with a
>>>>>> single
>>>>>> subscription queue and a JMS Queue called “foo”. N.B. The actual XML
>>>>>> is
>>>>>> just an example, there are multiple ways this could be represented
>>>>>> that we
>>>>>> can define later.
>>>>>>
>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>> <*queues*>            <*queue* *name**="**foo”* />         </*queues*>
>>>>>>        </*anycast*>      <*mulcast*>         <*queues*>
>>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>
>>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>>> auto-creation of destinations across protocols. Having this show up in
>>>>> the
>>>>> configs would give developers and admins more information to
>>>>> troubleshoot
>>>>> the mixed address type+protocol scenario.
>>>>>
>>>>> 2. Sending to “multicast”, “anycast” or “all”
>>>>>>
>>>>>> As mentioned earlier JMS (and other clients such as STOMP via
>>>>>> prefixing)
>>>>>> allow the producer to identify the type of end point it would like to
>>>>>> send
>>>>>> to.
>>>>>>
>>>>>> If a JMS client creates a producer and passes in a topic with address
>>>>>> “foo”. Then only the queues associated with the “multicast” section of
>>>>>> the
>>>>>> address. A similar thing happens when the JMS producer sends to a
>>>>>> “queue”
>>>>>> messages should be distributed amongst the queues associated with the
>>>>>> “anycast” section of the address.
>>>>>>
>>>>>> There may also be a case when a producer does not identify the
>>>>>> endpoint
>>>>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this. In
>>>>>> this
>>>>>> scenario both should happen. All the queues under the multicast
>>>>>> section
>>>>>> get
>>>>>> a copy of the message, and one queue under the anycast section gets
>>>>>> the
>>>>>> message.
>>>>>>
>>>>>> Modification: None Needed. Internal APIs would need to be updated to
>>>>>> allow
>>>>>> this functionality.
>>>>>>
>>>>> I think the "deliver to all" scenario should be fine. This seems
>>>>> analogous
>>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some
>>>>> scenarios
>>>>> and report back any gotchas.
>>>>>
>>>>> 3. Support for prefixes to identify endpoint types
>>>>>>
>>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>>>> prefix notation.
>>>>>>
>>>>>> e.g. queue:///foo
>>>>>>
>>>>>> Which would identify:
>>>>>>
>>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>>>
>>>>>> Modifications Needed: None to the model. An additional parameter to
>>>>>> the
>>>>>> acceptors should be added to identify the prefix.
>>>>>>
>>>>> Just as a check point in the syntax+naming convention in your provided
>>>>> example... would the name actually be:
>>>>>
>>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>>
>>>> The queue name can be anything.  It's the address that is used by
>>>> consumer/producer.  The protocol handler / broker will decided which
>>>> queue
>>>> to connect to.
>>>>
>>>>> 4. Multiple endpoints are defined, but client does not specify
>>>>> “endpoint
>>>>>>
>>>>>> routing type” when consuming
>>>>>>
>>>>>> Handling cases where consumers does not pass enough information in
>>>>>> their
>>>>>> address or via protocol specific mechanisms to identify an endpoint.
>>>>>> Let’s
>>>>>> say an AMQP client, requests to subscribe to the address “foo”, but
>>>>>> passes
>>>>>> no extra information. In the cases where there are only a single
>>>>>> endpoint
>>>>>> type defined, the consumer would associated with that endpoint type.
>>>>>> However, when both endpoint types are defined, the protocol handler
>>>>>> does
>>>>>> not know whether to associate this consumer with a queue under the
>>>>>> “anycast” section, or whether to create a new queue under the
>>>>>> “multicast”
>>>>>> section. e.g.
>>>>>>
>>>>>> Consume: “foo”
>>>>>>
>>>>>> <*addresses*>
>>>>>>
>>>>>>      <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>>>         <*queue* *name**="**foo”* />         </*queues*>
>>>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>>
>>>>>> In this scenario, we can make the default configurable on the
>>>>>> protocol/acceptor. Possible options for this could be:
>>>>>>
>>>>>> “multicast”: Defaults multicast
>>>>>>
>>>>>> “anycast”: Defaults to anycast
>>>>>>
>>>>>> “error”: Returns an error to the client
>>>>>>
>>>>>> Alternatively each protocol handler could handle this in the most
>>>>>> sensible
>>>>>> way for that protocol. MQTT might default to “multicast”, STOMP
>>>>>> “anycast”,
>>>>>> and AMQP to “error”.
>>>>>>
>>>>> Yep, this works great. I think there are two flags on the acceptors..
>>>>> one
>>>>> for auto-create and one for default handling of name collision. The
>>>>> defaults would most likely be the same.
>>>>>
>>>>> Something along the lines of:
>>>>> auto-create-default = "multicast | anycast"
>>>>> no-prefix-default = "multicast | anycast | error"
>>>>>
>>>>> 5. Fully qualified address names
>>>>>>
>>>>>> This feature allows a client to identify a particular address on a
>>>>>> specific
>>>>>> broker in a cluster. This could be achieved by the client using some
>>>>>> form
>>>>>> of address as:
>>>>>>
>>>>>> queue:///host/broker/address/
>>>>>>
>>>>>> Matt could you elaborate on the drivers behind this requirement
>>>>>> please.
>>>>>>
>>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>>>> current model will support this address syntax if we want to use it in
>>>>>> the
>>>>>> future.
>>>>>>
>>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>>> recommendation is to consider it in addressing now, so we can hopefully
>>>>> avoid any breakage down the road.
>>>>>
>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is
>>>>> the
>>>>> ability to fully address a destination using a format similar to this:
>>>>>
>>>>> queue://brokerB/myQueue
>>>>>
>>>> The advantage of this is to allow for scaling of the number of
>>>> destinations
>>>>>
>>>>> and allows for more dynamic broker networks to be created without
>>>>> applications having to have connection information for all brokers in a
>>>>> broker network. Think simple delivery+routing, and not horizontal
>>>>> scaling.
>>>>> It is very analogous to SMTP mail routing.
>>>>>
>>>>> Producer behavior:
>>>>>
>>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>>> queue://brokerB/myQueue
>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>>> acknowledgement and persistence accordingly
>>>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>>>> All messages for brokerB are generally stored in one queue-- this is
>>>>> how it
>>>>> helps with destination scaling
>>>>>
>>>>> Broker to broker behavior:
>>>>>
>>>>> There are generally two scenarios: always-on or periodic-check
>>>>>
>>>>> In "always-on"
>>>>> 1. brokerA looks for a brokerB in its list of cluster connections and
>>>>> then
>>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>>> messages, depending on cluster connection config)
>>>>>
>>>>> In "periodic-check"
>>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>>>> and then receives any messages that have arrived since last check
>>>>>
>>>>> TL;DR;
>>>>>
>>>>> It would be cool to consider remote broker delivery for messages while
>>>>> refactoring the address handling code. This would bring Artemis inline
>>>>> with
>>>>> the rest of the commercial EMS brokers. The impact now, hopefully, is
>>>>> minor
>>>>> and just thinking about default prefixes.
>>>>>
>>>> Understood, from our conversations on IRC I can see why this might be
>>>> useful.
>>>>
>>>>> Thanks,
>>>>> -Matt
>>>>>
>>>>>
>>>>>
>>>
>>>
>>> --
>>> Clebert Suconic
>>
>>
>>
>
>
> --
> Tim Bish
> twitter: @tabish121
> blog: http://timbish.blogspot.com/
>



-- 
Clebert Suconic

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Timothy Bish <ta...@gmail.com>.
This would be a good time to move to master, would allow other to more 
easily get onboard

On 12/07/2016 01:25 PM, Clebert Suconic wrote:
> I have rebased ARTEMIS-780 in top of master. There was a lot of conflicts...
>
> I have aggregated/squashed most of the commits by chronological order
> almost. So if Martyn had 10 commits in series I had squashed all of
> them, since they were small comments anyways. The good thing about
> this is that nobody would lose authorship of these commits.
>
> We will need to come up with more meaningful messages for these
> commits before we can merge into master. But this is getting into a
> very good shape. I'm impressed by the amount of work I see done on
> this branch. Very well done guys! I mean it!
>
> Also, I have saved the old branch before I pushed -f into my fork as
> old-ARTEMIS-780 in case I broke anything on the process. Please check
> everything and let me know if I did.
>
>
> And please rebase more often on this branch unless you merge it soon.
>
>
> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
> <cl...@gmail.com> wrote:
>> If / when we do the 2.0 bump, I would like to move a few classes.
>> Mainly under server.impl... I would like to move activations under a
>> package for activation, replicationendpoints for a package for
>> replications...    some small stuff like that just to reorganize
>> little things like this a bit.
>>
>> We can't do that now as that would break API and compatibility, but if
>> we do the bump, I would like to make that simple move.
>>
>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com> wrote:
>>> Hi Matt,
>>>
>>> Comments inline.
>>>
>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:
>>>
>>>> Martyn-
>>>>
>>>> I think you nailed it here-- well done =)
>>>>
>>>> My notes in-line--
>>>>
>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>>
>>>>> 1. Ability to route messages to queues with the same address, but
>>>>> different
>>>>> routing semantics.
>>>>>
>>>>> The proposal outlined in ARTEMIS-780 outlines a new model that introduces
>>>>> an address object at the configuration and management layer. In the
>>>>> proposal it is not possible to create 2 addresses with different routing
>>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>>> compatability with other vendors).
>>>>>
>>>>> Potential Modification: Addresses can have multiple routing type
>>>>> \u201cendpoints\u201d, either \u201cmulticast\u201d only, \u201canycast\u201d only or both. The example
>>>>> below would be used to represent a JMS Topic called \u201cfoo\u201d, with a single
>>>>> subscription queue and a JMS Queue called \u201cfoo\u201d. N.B. The actual XML is
>>>>> just an example, there are multiple ways this could be represented that we
>>>>> can define later.
>>>>>
>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>> <*queues*>            <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>>        </*anycast*>      <*mulcast*>         <*queues*>
>>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>
>>>> I think this solves it. The crux of the issues (for me) boils down to
>>>> auto-creation of destinations across protocols. Having this show up in the
>>>> configs would give developers and admins more information to troubleshoot
>>>> the mixed address type+protocol scenario.
>>>>
>>>> 2. Sending to \u201cmulticast\u201d, \u201canycast\u201d or \u201call\u201d
>>>>> As mentioned earlier JMS (and other clients such as STOMP via prefixing)
>>>>> allow the producer to identify the type of end point it would like to send
>>>>> to.
>>>>>
>>>>> If a JMS client creates a producer and passes in a topic with address
>>>>> \u201cfoo\u201d. Then only the queues associated with the \u201cmulticast\u201d section of the
>>>>> address. A similar thing happens when the JMS producer sends to a \u201cqueue\u201d
>>>>> messages should be distributed amongst the queues associated with the
>>>>> \u201canycast\u201d section of the address.
>>>>>
>>>>> There may also be a case when a producer does not identify the endpoint
>>>>> type, and simply sends to \u201cfoo\u201d. AMQP or MQTT may want to do this. In this
>>>>> scenario both should happen. All the queues under the multicast section
>>>>> get
>>>>> a copy of the message, and one queue under the anycast section gets the
>>>>> message.
>>>>>
>>>>> Modification: None Needed. Internal APIs would need to be updated to allow
>>>>> this functionality.
>>>>>
>>>> I think the "deliver to all" scenario should be fine. This seems analogous
>>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some scenarios
>>>> and report back any gotchas.
>>>>
>>>> 3. Support for prefixes to identify endpoint types
>>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>>> prefix notation.
>>>>>
>>>>> e.g. queue:///foo
>>>>>
>>>>> Which would identify:
>>>>>
>>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>>
>>>>> Modifications Needed: None to the model. An additional parameter to the
>>>>> acceptors should be added to identify the prefix.
>>>>>
>>>> Just as a check point in the syntax+naming convention in your provided
>>>> example... would the name actually be:
>>>>
>>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>>
>>> The queue name can be anything.  It's the address that is used by
>>> consumer/producer.  The protocol handler / broker will decided which queue
>>> to connect to.
>>>
>>>> 4. Multiple endpoints are defined, but client does not specify \u201cendpoint
>>>>> routing type\u201d when consuming
>>>>>
>>>>> Handling cases where consumers does not pass enough information in their
>>>>> address or via protocol specific mechanisms to identify an endpoint. Let\u2019s
>>>>> say an AMQP client, requests to subscribe to the address \u201cfoo\u201d, but passes
>>>>> no extra information. In the cases where there are only a single endpoint
>>>>> type defined, the consumer would associated with that endpoint type.
>>>>> However, when both endpoint types are defined, the protocol handler does
>>>>> not know whether to associate this consumer with a queue under the
>>>>> \u201canycast\u201d section, or whether to create a new queue under the \u201cmulticast\u201d
>>>>> section. e.g.
>>>>>
>>>>> Consume: \u201cfoo\u201d
>>>>>
>>>>> <*addresses*>
>>>>>
>>>>>      <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>>         <*queue* *name**="**foo\u201d* />         </*queues*>
>>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>>> </*multicast*>   </*address*></*addresses*>
>>>>>
>>>>> In this scenario, we can make the default configurable on the
>>>>> protocol/acceptor. Possible options for this could be:
>>>>>
>>>>> \u201cmulticast\u201d: Defaults multicast
>>>>>
>>>>> \u201canycast\u201d: Defaults to anycast
>>>>>
>>>>> \u201cerror\u201d: Returns an error to the client
>>>>>
>>>>> Alternatively each protocol handler could handle this in the most sensible
>>>>> way for that protocol. MQTT might default to \u201cmulticast\u201d, STOMP \u201canycast\u201d,
>>>>> and AMQP to \u201cerror\u201d.
>>>>>
>>>> Yep, this works great. I think there are two flags on the acceptors.. one
>>>> for auto-create and one for default handling of name collision. The
>>>> defaults would most likely be the same.
>>>>
>>>> Something along the lines of:
>>>> auto-create-default = "multicast | anycast"
>>>> no-prefix-default = "multicast | anycast | error"
>>>>
>>>> 5. Fully qualified address names
>>>>> This feature allows a client to identify a particular address on a
>>>>> specific
>>>>> broker in a cluster. This could be achieved by the client using some form
>>>>> of address as:
>>>>>
>>>>> queue:///host/broker/address/
>>>>>
>>>>> Matt could you elaborate on the drivers behind this requirement please.
>>>>>
>>>>> I am of the opinion that this is out of the scope of the addressing
>>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>>> current model will support this address syntax if we want to use it in the
>>>>> future.
>>>>>
>>>> I agree that tackling the impl of this should be out-of-scope. My
>>>> recommendation is to consider it in addressing now, so we can hopefully
>>>> avoid any breakage down the road.
>>>>
>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is the
>>>> ability to fully address a destination using a format similar to this:
>>>>
>>>> queue://brokerB/myQueue
>>>>
>>> The advantage of this is to allow for scaling of the number of destinations
>>>> and allows for more dynamic broker networks to be created without
>>>> applications having to have connection information for all brokers in a
>>>> broker network. Think simple delivery+routing, and not horizontal scaling.
>>>> It is very analogous to SMTP mail routing.
>>>>
>>>> Producer behavior:
>>>>
>>>> 1. Client X connects to brokerA and sends it a message addressed:
>>>> queue://brokerB/myQueue
>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>>> acknowledgement and persistence accordingly
>>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>>> All messages for brokerB are generally stored in one queue-- this is how it
>>>> helps with destination scaling
>>>>
>>>> Broker to broker behavior:
>>>>
>>>> There are generally two scenarios: always-on or periodic-check
>>>>
>>>> In "always-on"
>>>> 1. brokerA looks for a brokerB in its list of cluster connections and then
>>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>>> messages, depending on cluster connection config)
>>>>
>>>> In "periodic-check"
>>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>>> and then receives any messages that have arrived since last check
>>>>
>>>> TL;DR;
>>>>
>>>> It would be cool to consider remote broker delivery for messages while
>>>> refactoring the address handling code. This would bring Artemis inline with
>>>> the rest of the commercial EMS brokers. The impact now, hopefully, is minor
>>>> and just thinking about default prefixes.
>>>>
>>> Understood, from our conversations on IRC I can see why this might be
>>> useful.
>>>
>>>> Thanks,
>>>> -Matt
>>>>
>>>>
>>>>
>>
>>
>> --
>> Clebert Suconic
>
>


-- 
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
I have rebased ARTEMIS-780 in top of master. There was a lot of conflicts...

I have aggregated/squashed most of the commits by chronological order
almost. So if Martyn had 10 commits in series I had squashed all of
them, since they were small comments anyways. The good thing about
this is that nobody would lose authorship of these commits.

We will need to come up with more meaningful messages for these
commits before we can merge into master. But this is getting into a
very good shape. I'm impressed by the amount of work I see done on
this branch. Very well done guys! I mean it!

Also, I have saved the old branch before I pushed -f into my fork as
old-ARTEMIS-780 in case I broke anything on the process. Please check
everything and let me know if I did.


And please rebase more often on this branch unless you merge it soon.


On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
<cl...@gmail.com> wrote:
> If / when we do the 2.0 bump, I would like to move a few classes.
> Mainly under server.impl... I would like to move activations under a
> package for activation, replicationendpoints for a package for
> replications...    some small stuff like that just to reorganize
> little things like this a bit.
>
> We can't do that now as that would break API and compatibility, but if
> we do the bump, I would like to make that simple move.
>
> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com> wrote:
>> Hi Matt,
>>
>> Comments inline.
>>
>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:
>>
>>> Martyn-
>>>
>>> I think you nailed it here-- well done =)
>>>
>>> My notes in-line--
>>>
>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>>
>>>> 1. Ability to route messages to queues with the same address, but
>>>> different
>>>> routing semantics.
>>>>
>>>> The proposal outlined in ARTEMIS-780 outlines a new model that introduces
>>>> an address object at the configuration and management layer. In the
>>>> proposal it is not possible to create 2 addresses with different routing
>>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>>> compatability with other vendors).
>>>>
>>>> Potential Modification: Addresses can have multiple routing type
>>>> “endpoints”, either “multicast” only, “anycast” only or both. The example
>>>> below would be used to represent a JMS Topic called “foo”, with a single
>>>> subscription queue and a JMS Queue called “foo”. N.B. The actual XML is
>>>> just an example, there are multiple ways this could be represented that we
>>>> can define later.
>>>>
>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>> <*queues*>            <*queue* *name**="**foo”* />         </*queues*>
>>>>       </*anycast*>      <*mulcast*>         <*queues*>
>>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>>> </*multicast*>   </*address*></*addresses*>
>>>>
>>> I think this solves it. The crux of the issues (for me) boils down to
>>> auto-creation of destinations across protocols. Having this show up in the
>>> configs would give developers and admins more information to troubleshoot
>>> the mixed address type+protocol scenario.
>>>
>>> 2. Sending to “multicast”, “anycast” or “all”
>>>>
>>>> As mentioned earlier JMS (and other clients such as STOMP via prefixing)
>>>> allow the producer to identify the type of end point it would like to send
>>>> to.
>>>>
>>>> If a JMS client creates a producer and passes in a topic with address
>>>> “foo”. Then only the queues associated with the “multicast” section of the
>>>> address. A similar thing happens when the JMS producer sends to a “queue”
>>>> messages should be distributed amongst the queues associated with the
>>>> “anycast” section of the address.
>>>>
>>>> There may also be a case when a producer does not identify the endpoint
>>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this. In this
>>>> scenario both should happen. All the queues under the multicast section
>>>> get
>>>> a copy of the message, and one queue under the anycast section gets the
>>>> message.
>>>>
>>>> Modification: None Needed. Internal APIs would need to be updated to allow
>>>> this functionality.
>>>>
>>> I think the "deliver to all" scenario should be fine. This seems analogous
>>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some scenarios
>>> and report back any gotchas.
>>>
>>> 3. Support for prefixes to identify endpoint types
>>>>
>>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>>> vendors, identify the endpoint type (in producer and consumer) using a
>>>> prefix notation.
>>>>
>>>> e.g. queue:///foo
>>>>
>>>> Which would identify:
>>>>
>>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>>
>>>> Modifications Needed: None to the model. An additional parameter to the
>>>> acceptors should be added to identify the prefix.
>>>>
>>> Just as a check point in the syntax+naming convention in your provided
>>> example... would the name actually be:
>>>
>>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>>
>> The queue name can be anything.  It's the address that is used by
>> consumer/producer.  The protocol handler / broker will decided which queue
>> to connect to.
>>
>>>
>>> 4. Multiple endpoints are defined, but client does not specify “endpoint
>>>> routing type” when consuming
>>>>
>>>> Handling cases where consumers does not pass enough information in their
>>>> address or via protocol specific mechanisms to identify an endpoint. Let’s
>>>> say an AMQP client, requests to subscribe to the address “foo”, but passes
>>>> no extra information. In the cases where there are only a single endpoint
>>>> type defined, the consumer would associated with that endpoint type.
>>>> However, when both endpoint types are defined, the protocol handler does
>>>> not know whether to associate this consumer with a queue under the
>>>> “anycast” section, or whether to create a new queue under the “multicast”
>>>> section. e.g.
>>>>
>>>> Consume: “foo”
>>>>
>>>> <*addresses*>
>>>>
>>>>     <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>>        <*queue* *name**="**foo”* />         </*queues*>
>>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>>> *name**="my.topic.subscription" */>         </*queues*>
>>>> </*multicast*>   </*address*></*addresses*>
>>>>
>>>> In this scenario, we can make the default configurable on the
>>>> protocol/acceptor. Possible options for this could be:
>>>>
>>>> “multicast”: Defaults multicast
>>>>
>>>> “anycast”: Defaults to anycast
>>>>
>>>> “error”: Returns an error to the client
>>>>
>>>> Alternatively each protocol handler could handle this in the most sensible
>>>> way for that protocol. MQTT might default to “multicast”, STOMP “anycast”,
>>>> and AMQP to “error”.
>>>>
>>>
>>> Yep, this works great. I think there are two flags on the acceptors.. one
>>> for auto-create and one for default handling of name collision. The
>>> defaults would most likely be the same.
>>>
>>> Something along the lines of:
>>> auto-create-default = "multicast | anycast"
>>> no-prefix-default = "multicast | anycast | error"
>>>
>>> 5. Fully qualified address names
>>>>
>>>> This feature allows a client to identify a particular address on a
>>>> specific
>>>> broker in a cluster. This could be achieved by the client using some form
>>>> of address as:
>>>>
>>>> queue:///host/broker/address/
>>>>
>>>> Matt could you elaborate on the drivers behind this requirement please.
>>>>
>>>> I am of the opinion that this is out of the scope of the addressing
>>>> changes, and is more to do with redirecting in cluster scenarios. The
>>>> current model will support this address syntax if we want to use it in the
>>>> future.
>>>>
>>> I agree that tackling the impl of this should be out-of-scope. My
>>> recommendation is to consider it in addressing now, so we can hopefully
>>> avoid any breakage down the road.
>>>
>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is the
>>> ability to fully address a destination using a format similar to this:
>>>
>>> queue://brokerB/myQueue
>>>
>> The advantage of this is to allow for scaling of the number of destinations
>>> and allows for more dynamic broker networks to be created without
>>> applications having to have connection information for all brokers in a
>>> broker network. Think simple delivery+routing, and not horizontal scaling.
>>> It is very analogous to SMTP mail routing.
>>>
>>> Producer behavior:
>>>
>>> 1. Client X connects to brokerA and sends it a message addressed:
>>> queue://brokerB/myQueue
>>> 2. brokerA accepts the message on behalf of brokerB and handles all
>>> acknowledgement and persistence accordingly
>>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>>> All messages for brokerB are generally stored in one queue-- this is how it
>>> helps with destination scaling
>>>
>>> Broker to broker behavior:
>>>
>>> There are generally two scenarios: always-on or periodic-check
>>>
>>> In "always-on"
>>> 1. brokerA looks for a brokerB in its list of cluster connections and then
>>> sends all messages for all queues for brokerB (or brokerB pulls all
>>> messages, depending on cluster connection config)
>>>
>>> In "periodic-check"
>>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>>> and then receives any messages that have arrived since last check
>>>
>>> TL;DR;
>>>
>>> It would be cool to consider remote broker delivery for messages while
>>> refactoring the address handling code. This would bring Artemis inline with
>>> the rest of the commercial EMS brokers. The impact now, hopefully, is minor
>>> and just thinking about default prefixes.
>>>
>> Understood, from our conversations on IRC I can see why this might be
>> useful.
>>
>>>
>>> Thanks,
>>> -Matt
>>>
>>>
>>>
>
>
>
> --
> Clebert Suconic



-- 
Clebert Suconic

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
If / when we do the 2.0 bump, I would like to move a few classes.
Mainly under server.impl... I would like to move activations under a
package for activation, replicationendpoints for a package for
replications...    some small stuff like that just to reorganize
little things like this a bit.

We can't do that now as that would break API and compatibility, but if
we do the bump, I would like to make that simple move.

On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mt...@redhat.com> wrote:
> Hi Matt,
>
> Comments inline.
>
> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:
>
>> Martyn-
>>
>> I think you nailed it here-- well done =)
>>
>> My notes in-line--
>>
>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>>
>>> 1. Ability to route messages to queues with the same address, but
>>> different
>>> routing semantics.
>>>
>>> The proposal outlined in ARTEMIS-780 outlines a new model that introduces
>>> an address object at the configuration and management layer. In the
>>> proposal it is not possible to create 2 addresses with different routing
>>> types. This causes a problem with existing clients (JMS, STOMP and for
>>> compatability with other vendors).
>>>
>>> Potential Modification: Addresses can have multiple routing type
>>> “endpoints”, either “multicast” only, “anycast” only or both. The example
>>> below would be used to represent a JMS Topic called “foo”, with a single
>>> subscription queue and a JMS Queue called “foo”. N.B. The actual XML is
>>> just an example, there are multiple ways this could be represented that we
>>> can define later.
>>>
>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>> <*queues*>            <*queue* *name**="**foo”* />         </*queues*>
>>>       </*anycast*>      <*mulcast*>         <*queues*>
>>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>>> </*multicast*>   </*address*></*addresses*>
>>>
>> I think this solves it. The crux of the issues (for me) boils down to
>> auto-creation of destinations across protocols. Having this show up in the
>> configs would give developers and admins more information to troubleshoot
>> the mixed address type+protocol scenario.
>>
>> 2. Sending to “multicast”, “anycast” or “all”
>>>
>>> As mentioned earlier JMS (and other clients such as STOMP via prefixing)
>>> allow the producer to identify the type of end point it would like to send
>>> to.
>>>
>>> If a JMS client creates a producer and passes in a topic with address
>>> “foo”. Then only the queues associated with the “multicast” section of the
>>> address. A similar thing happens when the JMS producer sends to a “queue”
>>> messages should be distributed amongst the queues associated with the
>>> “anycast” section of the address.
>>>
>>> There may also be a case when a producer does not identify the endpoint
>>> type, and simply sends to “foo”. AMQP or MQTT may want to do this. In this
>>> scenario both should happen. All the queues under the multicast section
>>> get
>>> a copy of the message, and one queue under the anycast section gets the
>>> message.
>>>
>>> Modification: None Needed. Internal APIs would need to be updated to allow
>>> this functionality.
>>>
>> I think the "deliver to all" scenario should be fine. This seems analogous
>> to a CompositeDestination in ActiveMQ 5.x. I'll map through some scenarios
>> and report back any gotchas.
>>
>> 3. Support for prefixes to identify endpoint types
>>>
>>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>>> vendors, identify the endpoint type (in producer and consumer) using a
>>> prefix notation.
>>>
>>> e.g. queue:///foo
>>>
>>> Which would identify:
>>>
>>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>>
>>> Modifications Needed: None to the model. An additional parameter to the
>>> acceptors should be added to identify the prefix.
>>>
>> Just as a check point in the syntax+naming convention in your provided
>> example... would the name actually be:
>>
>> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>>
> The queue name can be anything.  It's the address that is used by
> consumer/producer.  The protocol handler / broker will decided which queue
> to connect to.
>
>>
>> 4. Multiple endpoints are defined, but client does not specify “endpoint
>>> routing type” when consuming
>>>
>>> Handling cases where consumers does not pass enough information in their
>>> address or via protocol specific mechanisms to identify an endpoint. Let’s
>>> say an AMQP client, requests to subscribe to the address “foo”, but passes
>>> no extra information. In the cases where there are only a single endpoint
>>> type defined, the consumer would associated with that endpoint type.
>>> However, when both endpoint types are defined, the protocol handler does
>>> not know whether to associate this consumer with a queue under the
>>> “anycast” section, or whether to create a new queue under the “multicast”
>>> section. e.g.
>>>
>>> Consume: “foo”
>>>
>>> <*addresses*>
>>>
>>>     <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>>        <*queue* *name**="**foo”* />         </*queues*>
>>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>>> *name**="my.topic.subscription" */>         </*queues*>
>>> </*multicast*>   </*address*></*addresses*>
>>>
>>> In this scenario, we can make the default configurable on the
>>> protocol/acceptor. Possible options for this could be:
>>>
>>> “multicast”: Defaults multicast
>>>
>>> “anycast”: Defaults to anycast
>>>
>>> “error”: Returns an error to the client
>>>
>>> Alternatively each protocol handler could handle this in the most sensible
>>> way for that protocol. MQTT might default to “multicast”, STOMP “anycast”,
>>> and AMQP to “error”.
>>>
>>
>> Yep, this works great. I think there are two flags on the acceptors.. one
>> for auto-create and one for default handling of name collision. The
>> defaults would most likely be the same.
>>
>> Something along the lines of:
>> auto-create-default = "multicast | anycast"
>> no-prefix-default = "multicast | anycast | error"
>>
>> 5. Fully qualified address names
>>>
>>> This feature allows a client to identify a particular address on a
>>> specific
>>> broker in a cluster. This could be achieved by the client using some form
>>> of address as:
>>>
>>> queue:///host/broker/address/
>>>
>>> Matt could you elaborate on the drivers behind this requirement please.
>>>
>>> I am of the opinion that this is out of the scope of the addressing
>>> changes, and is more to do with redirecting in cluster scenarios. The
>>> current model will support this address syntax if we want to use it in the
>>> future.
>>>
>> I agree that tackling the impl of this should be out-of-scope. My
>> recommendation is to consider it in addressing now, so we can hopefully
>> avoid any breakage down the road.
>>
>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is the
>> ability to fully address a destination using a format similar to this:
>>
>> queue://brokerB/myQueue
>>
> The advantage of this is to allow for scaling of the number of destinations
>> and allows for more dynamic broker networks to be created without
>> applications having to have connection information for all brokers in a
>> broker network. Think simple delivery+routing, and not horizontal scaling.
>> It is very analogous to SMTP mail routing.
>>
>> Producer behavior:
>>
>> 1. Client X connects to brokerA and sends it a message addressed:
>> queue://brokerB/myQueue
>> 2. brokerA accepts the message on behalf of brokerB and handles all
>> acknowledgement and persistence accordingly
>> 3. brokerA would then store the message in a "queue" for brokerB. Note:
>> All messages for brokerB are generally stored in one queue-- this is how it
>> helps with destination scaling
>>
>> Broker to broker behavior:
>>
>> There are generally two scenarios: always-on or periodic-check
>>
>> In "always-on"
>> 1. brokerA looks for a brokerB in its list of cluster connections and then
>> sends all messages for all queues for brokerB (or brokerB pulls all
>> messages, depending on cluster connection config)
>>
>> In "periodic-check"
>> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
>> and then receives any messages that have arrived since last check
>>
>> TL;DR;
>>
>> It would be cool to consider remote broker delivery for messages while
>> refactoring the address handling code. This would bring Artemis inline with
>> the rest of the commercial EMS brokers. The impact now, hopefully, is minor
>> and just thinking about default prefixes.
>>
> Understood, from our conversations on IRC I can see why this might be
> useful.
>
>>
>> Thanks,
>> -Matt
>>
>>
>>



-- 
Clebert Suconic

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
Hi Matt,

Comments inline.

On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:

> Martyn-
>
> I think you nailed it here-- well done =)
>
> My notes in-line--
>
> On 11/21/16 10:45 AM, Martyn Taylor wrote:
>
>> 1. Ability to route messages to queues with the same address, but
>> different
>> routing semantics.
>>
>> The proposal outlined in ARTEMIS-780 outlines a new model that introduces
>> an address object at the configuration and management layer. In the
>> proposal it is not possible to create 2 addresses with different routing
>> types. This causes a problem with existing clients (JMS, STOMP and for
>> compatability with other vendors).
>>
>> Potential Modification: Addresses can have multiple routing type
>> “endpoints”, either “multicast” only, “anycast” only or both. The example
>> below would be used to represent a JMS Topic called “foo”, with a single
>> subscription queue and a JMS Queue called “foo”. N.B. The actual XML is
>> just an example, there are multiple ways this could be represented that we
>> can define later.
>>
>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>> <*queues*>            <*queue* *name**="**foo”* />         </*queues*>
>>       </*anycast*>      <*mulcast*>         <*queues*>
>> <*queue* *name**="my.topic.subscription" */>         </*queues*>
>> </*multicast*>   </*address*></*addresses*>
>>
> I think this solves it. The crux of the issues (for me) boils down to
> auto-creation of destinations across protocols. Having this show up in the
> configs would give developers and admins more information to troubleshoot
> the mixed address type+protocol scenario.
>
> 2. Sending to “multicast”, “anycast” or “all”
>>
>> As mentioned earlier JMS (and other clients such as STOMP via prefixing)
>> allow the producer to identify the type of end point it would like to send
>> to.
>>
>> If a JMS client creates a producer and passes in a topic with address
>> “foo”. Then only the queues associated with the “multicast” section of the
>> address. A similar thing happens when the JMS producer sends to a “queue”
>> messages should be distributed amongst the queues associated with the
>> “anycast” section of the address.
>>
>> There may also be a case when a producer does not identify the endpoint
>> type, and simply sends to “foo”. AMQP or MQTT may want to do this. In this
>> scenario both should happen. All the queues under the multicast section
>> get
>> a copy of the message, and one queue under the anycast section gets the
>> message.
>>
>> Modification: None Needed. Internal APIs would need to be updated to allow
>> this functionality.
>>
> I think the "deliver to all" scenario should be fine. This seems analogous
> to a CompositeDestination in ActiveMQ 5.x. I'll map through some scenarios
> and report back any gotchas.
>
> 3. Support for prefixes to identify endpoint types
>>
>> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
>> vendors, identify the endpoint type (in producer and consumer) using a
>> prefix notation.
>>
>> e.g. queue:///foo
>>
>> Which would identify:
>>
>> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
>> <*queues*>            <*queue* *name**="my.foo.queue" */>
>> </*queues*>      </*anycast*>   </*address*></*addresses*>
>>
>> Modifications Needed: None to the model. An additional parameter to the
>> acceptors should be added to identify the prefix.
>>
> Just as a check point in the syntax+naming convention in your provided
> example... would the name actually be:
>
> <*queue* *name**="foo" .. vs "my.foo.queue" ?
>
The queue name can be anything.  It's the address that is used by
consumer/producer.  The protocol handler / broker will decided which queue
to connect to.

>
> 4. Multiple endpoints are defined, but client does not specify “endpoint
>> routing type” when consuming
>>
>> Handling cases where consumers does not pass enough information in their
>> address or via protocol specific mechanisms to identify an endpoint. Let’s
>> say an AMQP client, requests to subscribe to the address “foo”, but passes
>> no extra information. In the cases where there are only a single endpoint
>> type defined, the consumer would associated with that endpoint type.
>> However, when both endpoint types are defined, the protocol handler does
>> not know whether to associate this consumer with a queue under the
>> “anycast” section, or whether to create a new queue under the “multicast”
>> section. e.g.
>>
>> Consume: “foo”
>>
>> <*addresses*>
>>
>>     <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>>        <*queue* *name**="**foo”* />         </*queues*>
>> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
>> *name**="my.topic.subscription" */>         </*queues*>
>> </*multicast*>   </*address*></*addresses*>
>>
>> In this scenario, we can make the default configurable on the
>> protocol/acceptor. Possible options for this could be:
>>
>> “multicast”: Defaults multicast
>>
>> “anycast”: Defaults to anycast
>>
>> “error”: Returns an error to the client
>>
>> Alternatively each protocol handler could handle this in the most sensible
>> way for that protocol. MQTT might default to “multicast”, STOMP “anycast”,
>> and AMQP to “error”.
>>
>
> Yep, this works great. I think there are two flags on the acceptors.. one
> for auto-create and one for default handling of name collision. The
> defaults would most likely be the same.
>
> Something along the lines of:
> auto-create-default = "multicast | anycast"
> no-prefix-default = "multicast | anycast | error"
>
> 5. Fully qualified address names
>>
>> This feature allows a client to identify a particular address on a
>> specific
>> broker in a cluster. This could be achieved by the client using some form
>> of address as:
>>
>> queue:///host/broker/address/
>>
>> Matt could you elaborate on the drivers behind this requirement please.
>>
>> I am of the opinion that this is out of the scope of the addressing
>> changes, and is more to do with redirecting in cluster scenarios. The
>> current model will support this address syntax if we want to use it in the
>> future.
>>
> I agree that tackling the impl of this should be out-of-scope. My
> recommendation is to consider it in addressing now, so we can hopefully
> avoid any breakage down the road.
>
> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is the
> ability to fully address a destination using a format similar to this:
>
> queue://brokerB/myQueue
>
The advantage of this is to allow for scaling of the number of destinations
> and allows for more dynamic broker networks to be created without
> applications having to have connection information for all brokers in a
> broker network. Think simple delivery+routing, and not horizontal scaling.
> It is very analogous to SMTP mail routing.
>
> Producer behavior:
>
> 1. Client X connects to brokerA and sends it a message addressed:
> queue://brokerB/myQueue
> 2. brokerA accepts the message on behalf of brokerB and handles all
> acknowledgement and persistence accordingly
> 3. brokerA would then store the message in a "queue" for brokerB. Note:
> All messages for brokerB are generally stored in one queue-- this is how it
> helps with destination scaling
>
> Broker to broker behavior:
>
> There are generally two scenarios: always-on or periodic-check
>
> In "always-on"
> 1. brokerA looks for a brokerB in its list of cluster connections and then
> sends all messages for all queues for brokerB (or brokerB pulls all
> messages, depending on cluster connection config)
>
> In "periodic-check"
> 1. brokerB connects to brokerA (or vice-versa) on a given time interval
> and then receives any messages that have arrived since last check
>
> TL;DR;
>
> It would be cool to consider remote broker delivery for messages while
> refactoring the address handling code. This would bring Artemis inline with
> the rest of the commercial EMS brokers. The impact now, hopefully, is minor
> and just thinking about default prefixes.
>
Understood, from our conversations on IRC I can see why this might be
useful.

>
> Thanks,
> -Matt
>
>
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
On Mon, Nov 21, 2016 at 2:02 PM, Matt Pavlovich <ma...@gmail.com> wrote:
> Martyn-
>
> I think you nailed it here-- well done =)

+1000

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
Martyn-

I think you nailed it here-- well done =)

My notes in-line--

On 11/21/16 10:45 AM, Martyn Taylor wrote:
> 1. Ability to route messages to queues with the same address, but different
> routing semantics.
>
> The proposal outlined in ARTEMIS-780 outlines a new model that introduces
> an address object at the configuration and management layer. In the
> proposal it is not possible to create 2 addresses with different routing
> types. This causes a problem with existing clients (JMS, STOMP and for
> compatability with other vendors).
>
> Potential Modification: Addresses can have multiple routing type
> \u201cendpoints\u201d, either \u201cmulticast\u201d only, \u201canycast\u201d only or both. The example
> below would be used to represent a JMS Topic called \u201cfoo\u201d, with a single
> subscription queue and a JMS Queue called \u201cfoo\u201d. N.B. The actual XML is
> just an example, there are multiple ways this could be represented that we
> can define later.
>
> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
> <*queues*>            <*queue* *name**="**foo\u201d* />         </*queues*>
>       </*anycast*>      <*mulcast*>         <*queues*>
> <*queue* *name**="my.topic.subscription" */>         </*queues*>
> </*multicast*>   </*address*></*addresses*>
I think this solves it. The crux of the issues (for me) boils down to 
auto-creation of destinations across protocols. Having this show up in 
the configs would give developers and admins more information to 
troubleshoot the mixed address type+protocol scenario.

> 2. Sending to \u201cmulticast\u201d, \u201canycast\u201d or \u201call\u201d
>
> As mentioned earlier JMS (and other clients such as STOMP via prefixing)
> allow the producer to identify the type of end point it would like to send
> to.
>
> If a JMS client creates a producer and passes in a topic with address
> \u201cfoo\u201d. Then only the queues associated with the \u201cmulticast\u201d section of the
> address. A similar thing happens when the JMS producer sends to a \u201cqueue\u201d
> messages should be distributed amongst the queues associated with the
> \u201canycast\u201d section of the address.
>
> There may also be a case when a producer does not identify the endpoint
> type, and simply sends to \u201cfoo\u201d. AMQP or MQTT may want to do this. In this
> scenario both should happen. All the queues under the multicast section get
> a copy of the message, and one queue under the anycast section gets the
> message.
>
> Modification: None Needed. Internal APIs would need to be updated to allow
> this functionality.
I think the "deliver to all" scenario should be fine. This seems 
analogous to a CompositeDestination in ActiveMQ 5.x. I'll map through 
some scenarios and report back any gotchas.

> 3. Support for prefixes to identify endpoint types
>
> Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
> vendors, identify the endpoint type (in producer and consumer) using a
> prefix notation.
>
> e.g. queue:///foo
>
> Which would identify:
>
> <*addresses*>   <*address* *name**="foo"*>      <*anycast*>
> <*queues*>            <*queue* *name**="my.foo.queue" */>
> </*queues*>      </*anycast*>   </*address*></*addresses*>
>
> Modifications Needed: None to the model. An additional parameter to the
> acceptors should be added to identify the prefix.
Just as a check point in the syntax+naming convention in your provided 
example... would the name actually be:

<*queue* *name**="foo" .. vs "my.foo.queue" ?

> 4. Multiple endpoints are defined, but client does not specify \u201cendpoint
> routing type\u201d when consuming
>
> Handling cases where consumers does not pass enough information in their
> address or via protocol specific mechanisms to identify an endpoint. Let\u2019s
> say an AMQP client, requests to subscribe to the address \u201cfoo\u201d, but passes
> no extra information. In the cases where there are only a single endpoint
> type defined, the consumer would associated with that endpoint type.
> However, when both endpoint types are defined, the protocol handler does
> not know whether to associate this consumer with a queue under the
> \u201canycast\u201d section, or whether to create a new queue under the \u201cmulticast\u201d
> section. e.g.
>
> Consume: \u201cfoo\u201d
>
> <*addresses*>
>
>     <*address* *name**="foo"*>      <*anycast*>         <*queues*>
>        <*queue* *name**="**foo\u201d* />         </*queues*>
> </*anycast*>      <*multicast*>         <*queues*>            <*queue*
> *name**="my.topic.subscription" */>         </*queues*>
> </*multicast*>   </*address*></*addresses*>
>
> In this scenario, we can make the default configurable on the
> protocol/acceptor. Possible options for this could be:
>
> \u201cmulticast\u201d: Defaults multicast
>
> \u201canycast\u201d: Defaults to anycast
>
> \u201cerror\u201d: Returns an error to the client
>
> Alternatively each protocol handler could handle this in the most sensible
> way for that protocol. MQTT might default to \u201cmulticast\u201d, STOMP \u201canycast\u201d,
> and AMQP to \u201cerror\u201d.

Yep, this works great. I think there are two flags on the acceptors.. 
one for auto-create and one for default handling of name collision. The 
defaults would most likely be the same.

Something along the lines of:
auto-create-default = "multicast | anycast"
no-prefix-default = "multicast | anycast | error"

> 5. Fully qualified address names
>
> This feature allows a client to identify a particular address on a specific
> broker in a cluster. This could be achieved by the client using some form
> of address as:
>
> queue:///host/broker/address/
>
> Matt could you elaborate on the drivers behind this requirement please.
>
> I am of the opinion that this is out of the scope of the addressing
> changes, and is more to do with redirecting in cluster scenarios. The
> current model will support this address syntax if we want to use it in the
> future.
I agree that tackling the impl of this should be out-of-scope. My 
recommendation is to consider it in addressing now, so we can hopefully 
avoid any breakage down the road.

A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc) is 
the ability to fully address a destination using a format similar to this:

queue://brokerB/myQueue

The advantage of this is to allow for scaling of the number of 
destinations and allows for more dynamic broker networks to be created 
without applications having to have connection information for all 
brokers in a broker network. Think simple delivery+routing, and not 
horizontal scaling. It is very analogous to SMTP mail routing.

Producer behavior:

1. Client X connects to brokerA and sends it a message addressed: 
queue://brokerB/myQueue
2. brokerA accepts the message on behalf of brokerB and handles all 
acknowledgement and persistence accordingly
3. brokerA would then store the message in a "queue" for brokerB. Note: 
All messages for brokerB are generally stored in one queue-- this is how 
it helps with destination scaling

Broker to broker behavior:

There are generally two scenarios: always-on or periodic-check

In "always-on"
1. brokerA looks for a brokerB in its list of cluster connections and 
then sends all messages for all queues for brokerB (or brokerB pulls all 
messages, depending on cluster connection config)

In "periodic-check"
1. brokerB connects to brokerA (or vice-versa) on a given time interval 
and then receives any messages that have arrived since last check

TL;DR;

It would be cool to consider remote broker delivery for messages while 
refactoring the address handling code. This would bring Artemis inline 
with the rest of the commercial EMS brokers. The impact now, hopefully, 
is minor and just thinking about default prefixes.

Thanks,
-Matt



Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
All,


I have read back through this list, the requirements document provided by
Matt, and our conversations on IRC.

From what I have read it seems there are a few missing pieces of
functionality that we need to address in these addressing changes and at
least one piece of functionality that should be possible to support in the
future, without changes to the model. I’ve tried my best to outline the
requirements and related them to the original proposal. Could those of you
who are interested please ack or comment to ensure I have covered all
scenarios. I’ve also outlined a potential modification to the model that
take these requirements into consideration.

1. Ability to route messages to queues with the same address, but different
routing semantics.

The proposal outlined in ARTEMIS-780 outlines a new model that introduces
an address object at the configuration and management layer. In the
proposal it is not possible to create 2 addresses with different routing
types. This causes a problem with existing clients (JMS, STOMP and for
compatability with other vendors).

Potential Modification: Addresses can have multiple routing type
“endpoints”, either “multicast” only, “anycast” only or both. The example
below would be used to represent a JMS Topic called “foo”, with a single
subscription queue and a JMS Queue called “foo”. N.B. The actual XML is
just an example, there are multiple ways this could be represented that we
can define later.

<*addresses*>   <*address* *name**="foo"*>      <*anycast*>
<*queues*>            <*queue* *name**="**foo”* />         </*queues*>
     </*anycast*>      <*mulcast*>         <*queues*>
<*queue* *name**="my.topic.subscription" */>         </*queues*>
</*multicast*>   </*address*></*addresses*>

2. Sending to “multicast”, “anycast” or “all”

As mentioned earlier JMS (and other clients such as STOMP via prefixing)
allow the producer to identify the type of end point it would like to send
to.

If a JMS client creates a producer and passes in a topic with address
“foo”. Then only the queues associated with the “multicast” section of the
address. A similar thing happens when the JMS producer sends to a “queue”
messages should be distributed amongst the queues associated with the
“anycast” section of the address.

There may also be a case when a producer does not identify the endpoint
type, and simply sends to “foo”. AMQP or MQTT may want to do this. In this
scenario both should happen. All the queues under the multicast section get
a copy of the message, and one queue under the anycast section gets the
message.

Modification: None Needed. Internal APIs would need to be updated to allow
this functionality.

3. Support for prefixes to identify endpoint types

Many clients, ActiveMQ 5.x, STOMP and potential clients from alternate
vendors, identify the endpoint type (in producer and consumer) using a
prefix notation.

e.g. queue:///foo

Which would identify:

<*addresses*>   <*address* *name**="foo"*>      <*anycast*>
<*queues*>            <*queue* *name**="my.foo.queue" */>
</*queues*>      </*anycast*>   </*address*></*addresses*>

Modifications Needed: None to the model. An additional parameter to the
acceptors should be added to identify the prefix.

4. Multiple endpoints are defined, but client does not specify “endpoint
routing type” when consuming

Handling cases where consumers does not pass enough information in their
address or via protocol specific mechanisms to identify an endpoint. Let’s
say an AMQP client, requests to subscribe to the address “foo”, but passes
no extra information. In the cases where there are only a single endpoint
type defined, the consumer would associated with that endpoint type.
However, when both endpoint types are defined, the protocol handler does
not know whether to associate this consumer with a queue under the
“anycast” section, or whether to create a new queue under the “multicast”
section. e.g.

Consume: “foo”

<*addresses*>

   <*address* *name**="foo"*>      <*anycast*>         <*queues*>
      <*queue* *name**="**foo”* />         </*queues*>
</*anycast*>      <*multicast*>         <*queues*>            <*queue*
*name**="my.topic.subscription" */>         </*queues*>
</*multicast*>   </*address*></*addresses*>

In this scenario, we can make the default configurable on the
protocol/acceptor. Possible options for this could be:

“multicast”: Defaults multicast

“anycast”: Defaults to anycast

“error”: Returns an error to the client

Alternatively each protocol handler could handle this in the most sensible
way for that protocol. MQTT might default to “multicast”, STOMP “anycast”,
and AMQP to “error”.

5. Fully qualified address names

This feature allows a client to identify a particular address on a specific
broker in a cluster. This could be achieved by the client using some form
of address as:

queue:///host/broker/address/

Matt could you elaborate on the drivers behind this requirement please.

I am of the opinion that this is out of the scope of the addressing
changes, and is more to do with redirecting in cluster scenarios. The
current model will support this address syntax if we want to use it in the
future.

Regards
Martyn

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
> —————————————————
> 1. Destination addresses will be stored in canonical form without prefixes
>     ie.  queue:///foo will be stored as name=“foo” type=“anycast”
>           topic://bar will be stored as name=“bar” type=“multicast”
>

I am trying to avoid that for compatibility issues.

I almost actually sent a long email here.. but what about this.. lets
park this till Monday (or maybe Tuesday).. and see if a proposal from
Martyn would help this.

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
jbertram noted some misuse of "destination" terminology.  Updated notes 
(hopefully mostly correct):

http://pastebin.com/gumnStwq


On 11/18/16 11:57 AM, Matt Pavlovich wrote:
> Good call, sounds like a plan. Here is the link and a copy of the 
> latest set of my notes trying to do the same.
>
> ref: http://pastebin.com/0WaMT8Yx
>
>
> Addressing Behavior Use Cases [Draft]
> \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014
> 1. Destination addresses will be stored in canonical form without 
> prefixes
>     ie.  queue:///foo will be stored as name=\u201cfoo\u201d type=\u201canycast\u201d
>           topic://bar will be stored as name=\u201cbar\u201d type=\u201cmulticast\u201d
>
> 2. Destinations must specify a type in the configuration
>
> 3. Destinations may be auto-created when the intersection of protocol 
> support, broker-side configuration and user permissions permits
>
> 4. [JMS] session.createQueue(\u201cfoo\u201d) must be translated to the fully 
> qualified name of \u201cqueue:///foo\u201d in the client provider library (or 
> some internal core protocol/api equiv)
>
> 5. [JMS] session.createQueue(\u201cqueue:///foo\u201d) and 
> session.createQueue(\u201cqueue://foo\u201d) and session.createQueue(\u201cfoo\u201d) 
> refer to the same destination
>
> 6. [STOMP and AMQP] will default to \u201canycast\u201d address type 
> (configurable at broker transport?) during lookup and throw an 
> exception if there is a name collision when using unqualified 
> destination name.
>     ie. User specifies \u201cblah\u201d and both types exist
>
> 7. [MQTT] will default to topic name unless fully qualified to use the 
> queue address (needs research.. MQTT may not allow URI format in 
> destination name)
>
> 8. [STOMP, AMQP and other protocols] will support appropriate prefix 
> mapping to their destination name format.  For example: /queue/foo and 
> /topic/foo will be translated to foo  type=\u201cany cast\u201d and foo  
> type=\u201cmulticast\u201d internally
>
> Questions:
>
> Q-0: Does Artemis support the separation of queue-like and topic-like 
> addresses?  ie.. foo type=\u201canycast\u201d and foo type=\u201cmulticast\u201d can 
> co-exist and are distinct addresses. (Not currently the behavior)
>
> Q-1: What destination type should be created by default for STOMP, 
> AMPQ and MQTT unqualified addresses?
>
> Q-2: What destination type should be looked up by default for STOMP, 
> AMPQ and MQTT of unqualified addresses?
>
> Q-3: How would queue://$broker/$host fully qualified destinations 
> names be supported in STOMP, AMQP and MQTT?
>
> Discussions:
>
> D-1: jbertram would like to table support for Q-3 fully qualified 
> names (host+destination) until after ARTEMIS-780 is done. The 
> reasoning is to keep things simple and avoid uncertain future complexity.
>
> D-2: mattrpav recommends planning for fully qualified names before 2.0 
> is released (doesn\u2019t need to be part of 780) in order in order to 
> avoid any impacts post-2.0. The reasoning is that in order for Artemis 
> to compete as a replacement with majority of EMS products (IBM MQ, 
> Tibco EMS, etc) host+destination routing is a must-have.
>
>
> On 11/18/16 11:29 AM, Martyn Taylor wrote:
>> All,
>>
>> I think we need to take a step back here and try to capture all the use
>> cases discussed thus far, we've had a few use cases outlined here and
>> plenty of discussion @ #apache-activemq channel.  I think it's 
>> difficult to
>> discuss solutions until everyone is on the same page when it comes to 
>> the
>> requirements.
>>
>> I'll start pulling this together, and reply here once I am done.
>>
>> Thanks for all the input so far.
>>
>> On Fri, Nov 18, 2016 at 4:00 PM, Matt Pavlovich <ma...@gmail.com> 
>> wrote:
>>
>>>    No. What I am thinking is that all addresses are prefixless. What 
>>> you are
>>>> really saying when you say \u201cqueue://foo\u201d is not that I want to
>>>> send/consume
>>>> to/from an address \u201cqueue://foo\u201d but that you want to send/consume 
>>>> to/from
>>>> an address named \u201cfoo\u201d and that you expect queue semantics on that
>>>> address.
>>>>
>>>> If you don\u2019t specify semantics with the address name. For example, 
>>>> let\u2019s
>>>> say in MQTT you send to \u201cfoo\u201d. This message would be sent to 1 
>>>> consumer
>>>> that have specified \u201cqueue://foo\u201d and all consumers that specified
>>>> \u201ctopic://foo\u201d. As far as Artemis is concerned the address is just 
>>>> "foo".
>>>> The prefixes are added in the clients, and used by the protocol 
>>>> managers
>>>> to
>>>> ask Artemis for certain behaviours.
>>>>
>>> How do you see this use case working out? If a JMS client sends a 
>>> message
>>> to session.createQueue("foo") and Artemis auto-creates a "foo"
>>> type="anycast". Then two MQTT clients (MQTT being topic-based) come 
>>> around
>>> a subscribe to "foo" do the MQTT clients round-robin the data or 
>>> each get a
>>> copy of the message?.
>> Can we shelve this for now and pick it up once we have outlined all 
>> the use
>> cases.
>>
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
Good call, sounds like a plan. Here is the link and a copy of the latest 
set of my notes trying to do the same.

ref: http://pastebin.com/0WaMT8Yx


Addressing Behavior Use Cases [Draft]
\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014
1. Destination addresses will be stored in canonical form without prefixes
     ie.  queue:///foo will be stored as name=\u201cfoo\u201d type=\u201canycast\u201d
           topic://bar will be stored as name=\u201cbar\u201d type=\u201cmulticast\u201d

2. Destinations must specify a type in the configuration

3. Destinations may be auto-created when the intersection of protocol 
support, broker-side configuration and user permissions permits

4. [JMS] session.createQueue(\u201cfoo\u201d) must be translated to the fully 
qualified name of \u201cqueue:///foo\u201d in the client provider library (or some 
internal core protocol/api equiv)

5. [JMS] session.createQueue(\u201cqueue:///foo\u201d) and 
session.createQueue(\u201cqueue://foo\u201d) and session.createQueue(\u201cfoo\u201d) refer 
to the same destination

6. [STOMP and AMQP] will default to \u201canycast\u201d address type (configurable 
at broker transport?) during lookup and throw an exception if there is a 
name collision when using unqualified destination name.
     ie. User specifies \u201cblah\u201d and both types exist

7. [MQTT] will default to topic name unless fully qualified to use the 
queue address (needs research.. MQTT may not allow URI format in 
destination name)

8. [STOMP, AMQP and other protocols] will support appropriate prefix 
mapping to their destination name format.  For example: /queue/foo and 
/topic/foo will be translated to foo  type=\u201cany cast\u201d and foo  
type=\u201cmulticast\u201d internally

Questions:

Q-0: Does Artemis support the separation of queue-like and topic-like 
addresses?  ie.. foo type=\u201canycast\u201d and foo type=\u201cmulticast\u201d can 
co-exist and are distinct addresses. (Not currently the behavior)

Q-1: What destination type should be created by default for STOMP, AMPQ 
and MQTT unqualified addresses?

Q-2: What destination type should be looked up by default for STOMP, 
AMPQ and MQTT of unqualified addresses?

Q-3: How would queue://$broker/$host fully qualified destinations names 
be supported in STOMP, AMQP and MQTT?

Discussions:

D-1: jbertram would like to table support for Q-3 fully qualified names 
(host+destination) until after ARTEMIS-780 is done. The reasoning is to 
keep things simple and avoid uncertain future complexity.

D-2: mattrpav recommends planning for fully qualified names before 2.0 
is released (doesn\u2019t need to be part of 780) in order in order to avoid 
any impacts post-2.0. The reasoning is that in order for Artemis to 
compete as a replacement with majority of EMS products (IBM MQ, Tibco 
EMS, etc) host+destination routing is a must-have.


On 11/18/16 11:29 AM, Martyn Taylor wrote:
> All,
>
> I think we need to take a step back here and try to capture all the use
> cases discussed thus far, we've had a few use cases outlined here and
> plenty of discussion @ #apache-activemq channel.  I think it's difficult to
> discuss solutions until everyone is on the same page when it comes to the
> requirements.
>
> I'll start pulling this together, and reply here once I am done.
>
> Thanks for all the input so far.
>
> On Fri, Nov 18, 2016 at 4:00 PM, Matt Pavlovich <ma...@gmail.com> wrote:
>
>>    No. What I am thinking is that all addresses are prefixless. What you are
>>> really saying when you say \u201cqueue://foo\u201d is not that I want to
>>> send/consume
>>> to/from an address \u201cqueue://foo\u201d but that you want to send/consume to/from
>>> an address named \u201cfoo\u201d and that you expect queue semantics on that
>>> address.
>>>
>>> If you don\u2019t specify semantics with the address name. For example, let\u2019s
>>> say in MQTT you send to \u201cfoo\u201d. This message would be sent to 1 consumer
>>> that have specified \u201cqueue://foo\u201d and all consumers that specified
>>> \u201ctopic://foo\u201d. As far as Artemis is concerned the address is just "foo".
>>> The prefixes are added in the clients, and used by the protocol managers
>>> to
>>> ask Artemis for certain behaviours.
>>>
>> How do you see this use case working out? If a JMS client sends a message
>> to session.createQueue("foo") and Artemis auto-creates a "foo"
>> type="anycast". Then two MQTT clients (MQTT being topic-based) come around
>> a subscribe to "foo" do the MQTT clients round-robin the data or each get a
>> copy of the message?.
> Can we shelve this for now and pick it up once we have outlined all the use
> cases.
>


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
All,

I think we need to take a step back here and try to capture all the use
cases discussed thus far, we've had a few use cases outlined here and
plenty of discussion @ #apache-activemq channel.  I think it's difficult to
discuss solutions until everyone is on the same page when it comes to the
requirements.

I'll start pulling this together, and reply here once I am done.

Thanks for all the input so far.

On Fri, Nov 18, 2016 at 4:00 PM, Matt Pavlovich <ma...@gmail.com> wrote:

>   No. What I am thinking is that all addresses are prefixless. What you are
>> really saying when you say “queue://foo” is not that I want to
>> send/consume
>> to/from an address “queue://foo” but that you want to send/consume to/from
>> an address named “foo” and that you expect queue semantics on that
>> address.
>>
>> If you don’t specify semantics with the address name. For example, let’s
>> say in MQTT you send to “foo”. This message would be sent to 1 consumer
>> that have specified “queue://foo” and all consumers that specified
>> “topic://foo”. As far as Artemis is concerned the address is just "foo".
>> The prefixes are added in the clients, and used by the protocol managers
>> to
>> ask Artemis for certain behaviours.
>>
> How do you see this use case working out? If a JMS client sends a message
> to session.createQueue("foo") and Artemis auto-creates a "foo"
> type="anycast". Then two MQTT clients (MQTT being topic-based) come around
> a subscribe to "foo" do the MQTT clients round-robin the data or each get a
> copy of the message?.

Can we shelve this for now and pick it up once we have outlined all the use
cases.

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
>   No. What I am thinking is that all addresses are prefixless. What you are
> really saying when you say \u201cqueue://foo\u201d is not that I want to send/consume
> to/from an address \u201cqueue://foo\u201d but that you want to send/consume to/from
> an address named \u201cfoo\u201d and that you expect queue semantics on that address.
>
> If you don\u2019t specify semantics with the address name. For example, let\u2019s
> say in MQTT you send to \u201cfoo\u201d. This message would be sent to 1 consumer
> that have specified \u201cqueue://foo\u201d and all consumers that specified
> \u201ctopic://foo\u201d. As far as Artemis is concerned the address is just "foo".
> The prefixes are added in the clients, and used by the protocol managers to
> ask Artemis for certain behaviours.
How do you see this use case working out? If a JMS client sends a 
message to session.createQueue("foo") and Artemis auto-creates a "foo" 
type="anycast". Then two MQTT clients (MQTT being topic-based) come 
around a subscribe to "foo" do the MQTT clients round-robin the data or 
each get a copy of the message?


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
On Thu, Nov 17, 2016 at 1:56 PM, Matt Pavlovich <ma...@gmail.com> wrote:

> On 11/17/16 5:49 AM, Martyn Taylor wrote:
>
> This is great feedback Matt, thanks.  I have a couple of questions/comments
>> below:
>>
>> On Wed, Nov 16, 2016 at 6:23 PM, Matt Pavlovich <ma...@gmail.com>
>> wrote:
>>
> <snip..>
>
>> Just so I understand exactly what you are saying here.  You're saying that
>> a client sends to "foo" and a consumer received messages sent to "foo".
>> In
>> order for the consumer to consume from "foo" it passes in either "foo",
>> "queue:///foo" or "topic:///foo" which determines how the messages are
>> propagated to the client?  "foo" means let the broker decide,
>> "queue:///foo" and "topic:///foo" mean let the client decide.  In addition
>> to these two approaches, it may be that the protocol itself wants to
>> decide.  MQTT for example, always requires a subscription.
>>
> 1. I hadn't thought of "prefix-less" destinations (aka "foo").  Are you
> thinking Artemis throws an exception if there is an overlap in destination
> names and it can't auto-resolve?
>
 No. What I am thinking is that all addresses are prefixless. What you are
really saying when you say “queue://foo” is not that I want to send/consume
to/from an address “queue://foo” but that you want to send/consume to/from
an address named “foo” and that you expect queue semantics on that address.

If you don’t specify semantics with the address name. For example, let’s
say in MQTT you send to “foo”. This message would be sent to 1 consumer
that have specified “queue://foo” and all consumers that specified
“topic://foo”. As far as Artemis is concerned the address is just "foo".
The prefixes are added in the clients, and used by the protocol managers to
ask Artemis for certain behaviours.

>
> 2. MQTT always requires a subscription? I don't find that to be the case.
> You can produce without a subscriber/consumer, no?
>
I was talking about the consumer behaviour only.

>   !! Warning probably off-topic!! a major gap in MQTT is its lack of
> queues/PTP/unicast. Being able to use the MQTT wire protocol to send to
> queues would be really useful.  ActiveMQ 5.x allows wiring MQTT protocol to
> use virtual topics for consumption, which is a big win.
>
> 3. The rub seems to be that JMS leans to having a definitive split b/w
> Topic and Queue, whereas STOMP and AMQP just have addresses and MQTT is
> topic-only-ish.

And Artemis CORE only has queues and addresses.  What we want to ensure is
that the CORE engine is capable of supporting all of the requirements for
JMS, MQTT and other protocols.

>


> One way to do this, not straying too far from the original proposal, would
>> be to make the address uniqueness a combination of the routing type and
>> the
>> address name.  This would allow something like:
>>
>> <address name="foo" routingType="anycast">
>> <address name="foo" routingType="multicast">
>>
>> We'd need to ensure there is a precedent set for times when a subscriber
>> just subscribes to "foo".  I'd say it makes sense for "multicast" to take
>> precedence in this case.
>>
> I'd disagree that we should have a precedence behavior. It too easily
> break if I produce JMS queue:///FOO and consume STOMP "foo" I wouldn't
> receive the message.





> I think an unqualified address should throw an exception when there is an
> overlap.

That's another approach.  Seems reasonable.  We could make it configurable
on a per protocol basis.


>


>
> I think it probably makes the most sense to have the following precedence
>> for the deciding party:
>>
>> 1. Broker
>> 2. Address prefixing/name scheme
>> 3. Protocol
>>
>> I think the prefix also needs to be configurable, but "queue:///"
>> "topic:///" seems like a sensible default.
>>
> +1
>
>> 3. As far as destination behaviors, how about using uri parameters to pass
>>> provider (Artemis) specific settings on-the-fly?
>>>
>>>      For example:  in AMPQ the address could be
>>> topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.
>>>      There is precedence in using uri parameters to configure the
>>> Destination in JMS as well. IBM MQ has session.createQueue("My.Queue?
>>> targetClient=1")
>>>
>>>      Note: AMQP supports options as well, so that could be used as well.
>>> However, uri's tend to be better for externalizing configuration
>>> management.
>>>
>>> I think supporting both options, uri and protocol specific parameters is
>> useful.  Rather than "nonSharedDurable" I think I'd prefer to map these
>> things onto address properties.  For example:
>>
>> "topic://foo?maxConsumers=1"
>>
>> Where the "topic:///" prefix is configurable.  This is essentially a non
>> shared, durable subscription.
>>
> I'm not married to any naming convention, but I it should reflect both the
> durability and "shared-ness". Regarding maxConsumers=1, are you thinking
> you'd want to limit "shared-ness" to an integer or true | false?
>
> ie.. would maxConsumer=3 limit shared consumers to three threads?

It would limit it to 3 server consumers.

N.B. The threading model in Artemis is different to ActiveMQ 5.x, we use
shared thread pools.

>
>
> 4. Destination name separation b/w protocol handlers.  STOMP and MQTT like
>>> "/" and JMS likes "." as destination name separators. Is there any
>>> thought
>>> to having a native destination name separator and then have
>>> protocol-specific convertors?
>>>
>>> This is how it works right now.  We have a native path separator which is
>> ".".  Protocol handlers map subscription addresses down to this.  This
>> does
>> mean that to define a multicast address for MQTT, you would need to do:
>>
>> "foo.bar" (vs "foo/bar" which is protocol specific).
>>
>> I've also outlined in the proposal a goal to allow these path separators
>> to
>> be configurable.  So you can specify pathSeparator="/", "." which would
>> mean that you can configure "foo/bar" or "foo.bar" they'd both act in the
>> same way.
>>
> +1 Configurable separator sounds good, I recommend that be a broker-wide
> config option and accessible by plugins so they can reference it correctly
> w/o presuming ".".
>
> 5. Fully qualified destination names that include a broker name. Other
>>> providers support fully-qualified destination names in JMS following the
>>> format:  queue://$brokerName/$queueName. Adding this would go a long way
>>> to supporting migration of current applications without having to change
>>> client-code.
>>>
>>> This is a little differently to how clustering currently works, I think
>> we
>> need to give this one some more thought.  Right now queues are clustered
>> automatically (well providing you enable the correct address namespace for
>> the cluster connection).  If you have a client listening on broker2 and a
>> producer from broker1, the messages will get propagated across the
>> cluster.
>>
>> You may have already explained this to me in the past, but can you give me
>> an example use case of when this might be necessary.
>>
> IoT/Retail Store/Kiosk scenario:
>
> The use case is "clustered brokers, non-clustered destinations". ie. Same
> simple queue name exist on all "edge" brokers, but the subscriptions are
> unique per edge broker.
>
> An IOT device comes online and its broker creates a duplex connection to
> central broker. IoT devices sends a "Hello World" to
> queue://central/HELLO.REQUEST with a replyTo of
> queue://iot1234/HELLO.RESPONSE.
>
> The central application receives the event, creates response and addresses
> the response to queue://iot1234/HELLO.RESPONSE. The message is then sent
> to the "central" broker who routes the message to the "iot1234" broker over
> the duplex connection which delivers it to the local
> queue:///HELLO.RESPONSE.
>
> 1. Edge device applications are all coded using the same queue names to
> keep things simple
> 2. Central applications are unaware and do not need to manage the the
> url's of edge brokers
> 3. Central broker handles routing, similar to email MTA
> 4. IBM MQ, Tibco EMS (~90% of EMS market share), and others support this
> and its heavily utilized.
>
> Thanks.  I think I still need more information/thought to fully understand
this and how it might work in Artemis.

> -Matt
>
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
On 11/17/16 5:49 AM, Martyn Taylor wrote:

> This is great feedback Matt, thanks.  I have a couple of questions/comments
> below:
>
> On Wed, Nov 16, 2016 at 6:23 PM, Matt Pavlovich <ma...@gmail.com> wrote:
<snip..>
> Just so I understand exactly what you are saying here.  You're saying that
> a client sends to "foo" and a consumer received messages sent to "foo".  In
> order for the consumer to consume from "foo" it passes in either "foo",
> "queue:///foo" or "topic:///foo" which determines how the messages are
> propagated to the client?  "foo" means let the broker decide,
> "queue:///foo" and "topic:///foo" mean let the client decide.  In addition
> to these two approaches, it may be that the protocol itself wants to
> decide.  MQTT for example, always requires a subscription.
1. I hadn't thought of "prefix-less" destinations (aka "foo").  Are you 
thinking Artemis throws an exception if there is an overlap in 
destination names and it can't auto-resolve?

2. MQTT always requires a subscription? I don't find that to be the 
case. You can produce without a subscriber/consumer, no?
   !! Warning probably off-topic!! a major gap in MQTT is its lack of 
queues/PTP/unicast. Being able to use the MQTT wire protocol to send to 
queues would be really useful.  ActiveMQ 5.x allows wiring MQTT protocol 
to use virtual topics for consumption, which is a big win.

3. The rub seems to be that JMS leans to having a definitive split b/w 
Topic and Queue, whereas STOMP and AMQP just have addresses and MQTT is 
topic-only-ish.
> One way to do this, not straying too far from the original proposal, would
> be to make the address uniqueness a combination of the routing type and the
> address name.  This would allow something like:
>
> <address name="foo" routingType="anycast">
> <address name="foo" routingType="multicast">
>
> We'd need to ensure there is a precedent set for times when a subscriber
> just subscribes to "foo".  I'd say it makes sense for "multicast" to take
> precedence in this case.
I'd disagree that we should have a precedence behavior. It too easily 
break if I produce JMS queue:///FOO and consume STOMP "foo" I wouldn't 
receive the message. I think an unqualified address should throw an 
exception when there is an overlap.

> I think it probably makes the most sense to have the following precedence
> for the deciding party:
>
> 1. Broker
> 2. Address prefixing/name scheme
> 3. Protocol
>
> I think the prefix also needs to be configurable, but "queue:///"
> "topic:///" seems like a sensible default.
+1
>> 3. As far as destination behaviors, how about using uri parameters to pass
>> provider (Artemis) specific settings on-the-fly?
>>
>>      For example:  in AMPQ the address could be
>> topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.
>>      There is precedence in using uri parameters to configure the
>> Destination in JMS as well. IBM MQ has session.createQueue("My.Queue?
>> targetClient=1")
>>
>>      Note: AMQP supports options as well, so that could be used as well.
>> However, uri's tend to be better for externalizing configuration management.
>>
> I think supporting both options, uri and protocol specific parameters is
> useful.  Rather than "nonSharedDurable" I think I'd prefer to map these
> things onto address properties.  For example:
>
> "topic://foo?maxConsumers=1"
>
> Where the "topic:///" prefix is configurable.  This is essentially a non
> shared, durable subscription.
I'm not married to any naming convention, but I it should reflect both 
the durability and "shared-ness". Regarding maxConsumers=1, are you 
thinking you'd want to limit "shared-ness" to an integer or true | false?

ie.. would maxConsumer=3 limit shared consumers to three threads?

>> 4. Destination name separation b/w protocol handlers.  STOMP and MQTT like
>> "/" and JMS likes "." as destination name separators. Is there any thought
>> to having a native destination name separator and then have
>> protocol-specific convertors?
>>
> This is how it works right now.  We have a native path separator which is
> ".".  Protocol handlers map subscription addresses down to this.  This does
> mean that to define a multicast address for MQTT, you would need to do:
>
> "foo.bar" (vs "foo/bar" which is protocol specific).
>
> I've also outlined in the proposal a goal to allow these path separators to
> be configurable.  So you can specify pathSeparator="/", "." which would
> mean that you can configure "foo/bar" or "foo.bar" they'd both act in the
> same way.
+1 Configurable separator sounds good, I recommend that be a broker-wide 
config option and accessible by plugins so they can reference it 
correctly w/o presuming ".".

>> 5. Fully qualified destination names that include a broker name. Other
>> providers support fully-qualified destination names in JMS following the
>> format:  queue://$brokerName/$queueName. Adding this would go a long way
>> to supporting migration of current applications without having to change
>> client-code.
>>
> This is a little differently to how clustering currently works, I think we
> need to give this one some more thought.  Right now queues are clustered
> automatically (well providing you enable the correct address namespace for
> the cluster connection).  If you have a client listening on broker2 and a
> producer from broker1, the messages will get propagated across the cluster.
>
> You may have already explained this to me in the past, but can you give me
> an example use case of when this might be necessary.
IoT/Retail Store/Kiosk scenario:

The use case is "clustered brokers, non-clustered destinations". ie. 
Same simple queue name exist on all "edge" brokers, but the 
subscriptions are unique per edge broker.

An IOT device comes online and its broker creates a duplex connection to 
central broker. IoT devices sends a "Hello World" to 
queue://central/HELLO.REQUEST with a replyTo of 
queue://iot1234/HELLO.RESPONSE.

The central application receives the event, creates response and 
addresses the response to queue://iot1234/HELLO.RESPONSE. The message is 
then sent to the "central" broker who routes the message to the 
"iot1234" broker over the duplex connection which delivers it to the 
local queue:///HELLO.RESPONSE.

1. Edge device applications are all coded using the same queue names to 
keep things simple
2. Central applications are unaware and do not need to manage the the 
url's of edge brokers
3. Central broker handles routing, similar to email MTA
4. IBM MQ, Tibco EMS (~90% of EMS market share), and others support this 
and its heavily utilized.

-Matt


Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Christopher Shannon <ch...@gmail.com>.
I've been keeping an eye on ARTEMIS-780 and I think this is going to be a
really good improvement and should help making things a bit easier in terms
of moving any features from 5.x that we want to move.  For example, I put
in JIRAs to migrate over 5.x style advisories but I am purposefully holding
off because this change should help make it easier to do that.

Also, +1 for bumping the version to 2.0 after this is completed.  This is a
major enough architecture change that I think it would be a good idea to
bump the major version.

On Thu, Nov 17, 2016 at 5:49 AM, Martyn Taylor <mt...@redhat.com> wrote:

> This is great feedback Matt, thanks.  I have a couple of questions/comments
> below:
>
> On Wed, Nov 16, 2016 at 6:23 PM, Matt Pavlovich <ma...@gmail.com>
> wrote:
>
> > Hi Martin-
> >
> > Glad to see this area getting dedicated attention. A couple things I
> > didn't see covered in the doc or the JIRA comments. (I'll be adding to
> the
> > JIRA comments as well.)
> >
> > Items:
> >
> > 0. Pre-configuring destinations is a big brain drain, so anything that
> can
> > be client-driven is a win. Also, protocol specific handlers could perform
> > the admin operations on-demand.
> >
> >    For example:  session.createDurableSubscriber(...)   The JMS handler
> > create the subscription on the behalf of the client.
> >
> Yes I agree.  We need to ensure we can both ways of defining the endpoint
> semantics, i.e. allow clients to request endpoint requirements and also
> have broker side configuration drive endpoint behaviour, ideally using the
> same underlying mechanism.
>
> >
> > 1. Separate topic and queue namespaces.. in JMS topic:///foo !=
> > queue:///foo. The addressing will need some sort of way to separate the
> two
> > during naming collisions.
> >
> Just so I understand exactly what you are saying here.  You're saying that
> a client sends to "foo" and a consumer received messages sent to "foo".  In
> order for the consumer to consume from "foo" it passes in either "foo",
> "queue:///foo" or "topic:///foo" which determines how the messages are
> propagated to the client?  "foo" means let the broker decide,
> "queue:///foo" and "topic:///foo" mean let the client decide.  In addition
> to these two approaches, it may be that the protocol itself wants to
> decide.  MQTT for example, always requires a subscription.
>
> One way to do this, not straying too far from the original proposal, would
> be to make the address uniqueness a combination of the routing type and the
> address name.  This would allow something like:
>
> <address name="foo" routingType="anycast">
> <address name="foo" routingType="multicast">
>
> We'd need to ensure there is a precedent set for times when a subscriber
> just subscribes to "foo".  I'd say it makes sense for "multicast" to take
> precedence in this case.
>
> I think it probably makes the most sense to have the following precedence
> for the deciding party:
>
> 1. Broker
> 2. Address prefixing/name scheme
> 3. Protocol
>
> I think the prefix also needs to be configurable, but "queue:///"
> "topic:///" seems like a sensible default.
>
> >
> > 2. In ActiveMQ 5.x, AMQP and STOMP handled the addressing by using
> > queue:/// and topic:/// prefixes. I don't think that is necessarily a bad
> > thing, but something to consider b/c we need to support #1
> >
> +1
>
> >
> > 3. As far as destination behaviors, how about using uri parameters to
> pass
> > provider (Artemis) specific settings on-the-fly?
> >
> >     For example:  in AMPQ the address could be
> > topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.
>
>
> >     There is precedence in using uri parameters to configure the
> > Destination in JMS as well. IBM MQ has session.createQueue("My.Queue?
> > targetClient=1")
> >
> >     Note: AMQP supports options as well, so that could be used as well.
> > However, uri's tend to be better for externalizing configuration
> management.
> >
> I think supporting both options, uri and protocol specific parameters is
> useful.  Rather than "nonSharedDurable" I think I'd prefer to map these
> things onto address properties.  For example:
>
> "topic://foo?maxConsumers=1"
>
> Where the "topic:///" prefix is configurable.  This is essentially a non
> shared, durable subscription.
>
> >
> > 4. Destination name separation b/w protocol handlers.  STOMP and MQTT
> like
> > "/" and JMS likes "." as destination name separators. Is there any
> thought
> > to having a native destination name separator and then have
> > protocol-specific convertors?
> >
> This is how it works right now.  We have a native path separator which is
> ".".  Protocol handlers map subscription addresses down to this.  This does
> mean that to define a multicast address for MQTT, you would need to do:
>
> "foo.bar" (vs "foo/bar" which is protocol specific).
>
> I've also outlined in the proposal a goal to allow these path separators to
> be configurable.  So you can specify pathSeparator="/", "." which would
> mean that you can configure "foo/bar" or "foo.bar" they'd both act in the
> same way.
>
> >
> > 5. Fully qualified destination names that include a broker name. Other
> > providers support fully-qualified destination names in JMS following the
> > format:  queue://$brokerName/$queueName. Adding this would go a long way
> > to supporting migration of current applications without having to change
> > client-code.
> >
> This is a little differently to how clustering currently works, I think we
> need to give this one some more thought.  Right now queues are clustered
> automatically (well providing you enable the correct address namespace for
> the cluster connection).  If you have a client listening on broker2 and a
> producer from broker1, the messages will get propagated across the cluster.
>
> You may have already explained this to me in the past, but can you give me
> an example use case of when this might be necessary.
>
> >
> >     Note: This would probably impact cluster handling as well, so perhaps
> > in phase 1 there is just a placeholder for supporting a broker name in
> the
> > future?
> >
> > -Matt
>
> Thanks
> Martyn
>
> >
> >
> > On 11/16/16 10:16 AM, Martyn Taylor wrote:
> >
> >> All,
> >>
> >> Some discussion has happened around this topic already, but I wanted to
> >> ensure that everyone here, who have not been following the
> >> JIRA/ARTEMIS-780
> >> branch has a chance for input and to digest the information in this
> >> proposal.
> >>
> >> In order to understand the motivators outlined here, you first need to
> >> understand how the existing addressing model works in Artemis. For those
> >> of
> >> you who are not familiar with how things currently work, I’ve added a
> >> document to the ARTEMIS-780 JIRA in the attachments section, that gives
> an
> >> overview of the existing model and some more detail / examples of the
> >> proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
> >> <https://issues.apache.org/jira/browse/ARTEMIS-780>*
> >>
> >> To summarise here, the Artemis routing/addressing model has some
> >> restrictions:
> >>
> >> 1. It’s not possible with core (and therefore across all protocols) to
> >> define ,at the broker side, semantics about addresses. i.e. whether an
> >> address behaves as a “point to point” or “publish subscribe” end point
> >>
> >> 2. For JMS destinations additional configuration and objects were added
> to
> >> the broker, that rely on name-spacing to add semantics to addresses i.e.
> >> “jms.topic.” “jms.queue.”  A couple of issues with this:
> >>
> >>     1.
> >>
> >>     This only works for JMS and no other protocols
> >>     2.
> >>
> >>     Name-spacing causes issues for cross protocol communication
> >>     3.
> >>
> >>     It means there’s two ways of doing things, 1 for JMS and 1 for
> >>     everything else.
> >>
> >> 3. The JMS and Core destination definitions do not have enough
> information
> >> to define more intricate behaviours. Such as whether an address should
> >> behave like a “shared subscription” or similar to a “volatile
> >> subscription”
> >> where clients don’t get messages missed when they are offline.
> >>
> >> 4. Some protocols (AMQP is a good example) don’t have enough information
> >> in
> >> their frames for the broker to determine how to behave for certain
> >> endpoints and rely on broker side configuration (or provider specific
> >> parameters).
> >>
> >> Proposal
> >>
> >> What I’d like to do (and what I’ve proposed in ARTEMIS-780) is to get
> rid
> >> of the JMS specific components and create a single unified mechanism for
> >> configuring all types of endpoints across all protocols to define:
> >>
> >>     -
> >>
> >>     Point to point (queue)
> >>     -
> >>
> >>     Shared Durable Subscriptions
> >>     -
> >>
> >>     Shared Non Durable Subscriptions
> >>     -
> >>
> >>     Non Shared durable subscriptions
> >>     -
> >>
> >>
> >>     Non Shared Non durable subscriptions
> >>
> >> To do this, the idea is to create a new “Address”
> configuration/management
> >> object, that has certain properties such as a routing type which
> >> represents
> >> how messages are routed to queues with this address.
> >>
> >> When a request for subscription is received by Artemis, the relevant
> piece
> >> can just look up the address and check it’s properties to determine how
> to
> >> behave (or if an address doesn’t exist) then default to our existing
> >> behaviour. For those interested in the details of how this might work
> I’ve
> >> outlined some specific examples in the document on the JIRA.
> >>
> >> What are the user impacts:
> >>
> >> 1. Configuration would need to be revised in order to expose the new
> >> addressing object. I propose that we either continue supporting the old
> >> schema for a while and/or provide a tool to migrate the configuration
> >> schema.
> >>
> >> 2. Some new management operations would need to be added to expose the
> new
> >> objects.
> >>
> >> 3. The JMS configuration and management objects would become obsolete,
> and
> >> would need removing. The Broker side JMS resources were only a thin
> facade
> >> to allow some JMS specific behaviour for managing destinations and for
> >> things like registering objects in JNDI.
> >>
> >> Broker side JNDI was removed in Artemis 1.0 in order to align with
> >> ActiveMQ
> >> 5.x style of client side JNDI.  These JMS pieces and their management
> >> objects don't really do much, creating connection factories for instance
> >> offers no functionality right now.  Going forward, users should be able
> to
> >> use the core management API to do everything.
> >>
> >> 4. All client applications should behave exactly as they were before.
> The
> >> proposal is for adding features to the core model, not removing any.
> For
> >> things like the Artemis JMS client which relied on name-spaces, they’ll
> be
> >> a mechanism to define a name-spaced address and a mechanism to switch
> back
> >> on name-spaces in the client.
> >>
> >> 5. Given some of the API changes and removal of the JMS specific pieces.
> >> This would likely warrant a major bump. i.e. Artemis 2.0.0.
> >>
> >> Whilst I’ve been looking at this, it’s become apparent, that the JMS
> >> pieces
> >> have leaked into lots of areas of the code base, which does mean we’d
> need
> >> to do a fair amount refactoring to move these bits to the new model.
> >>
> >> In my opinion this proposal can only be a good thing. It creates a
> single
> >> place (core) where all addressing objects are configured and managed and
> >> allows all protocol managers to plug into the same mechanism. It solves
> >> some of the cross protocol JMS → other protocols that we’ve seen users
> >> struggle with, but still offers a way to support all the old behaviour
> in
> >> client applications.
> >>
> >> What are others thoughts on this? Any suggestions, comments or concerns?
> >>
> >> Regards
> >> Martyn
> >>
> >>
> >
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Martyn Taylor <mt...@redhat.com>.
This is great feedback Matt, thanks.  I have a couple of questions/comments
below:

On Wed, Nov 16, 2016 at 6:23 PM, Matt Pavlovich <ma...@gmail.com> wrote:

> Hi Martin-
>
> Glad to see this area getting dedicated attention. A couple things I
> didn't see covered in the doc or the JIRA comments. (I'll be adding to the
> JIRA comments as well.)
>
> Items:
>
> 0. Pre-configuring destinations is a big brain drain, so anything that can
> be client-driven is a win. Also, protocol specific handlers could perform
> the admin operations on-demand.
>
>    For example:  session.createDurableSubscriber(...)   The JMS handler
> create the subscription on the behalf of the client.
>
Yes I agree.  We need to ensure we can both ways of defining the endpoint
semantics, i.e. allow clients to request endpoint requirements and also
have broker side configuration drive endpoint behaviour, ideally using the
same underlying mechanism.

>
> 1. Separate topic and queue namespaces.. in JMS topic:///foo !=
> queue:///foo. The addressing will need some sort of way to separate the two
> during naming collisions.
>
Just so I understand exactly what you are saying here.  You're saying that
a client sends to "foo" and a consumer received messages sent to "foo".  In
order for the consumer to consume from "foo" it passes in either "foo",
"queue:///foo" or "topic:///foo" which determines how the messages are
propagated to the client?  "foo" means let the broker decide,
"queue:///foo" and "topic:///foo" mean let the client decide.  In addition
to these two approaches, it may be that the protocol itself wants to
decide.  MQTT for example, always requires a subscription.

One way to do this, not straying too far from the original proposal, would
be to make the address uniqueness a combination of the routing type and the
address name.  This would allow something like:

<address name="foo" routingType="anycast">
<address name="foo" routingType="multicast">

We'd need to ensure there is a precedent set for times when a subscriber
just subscribes to "foo".  I'd say it makes sense for "multicast" to take
precedence in this case.

I think it probably makes the most sense to have the following precedence
for the deciding party:

1. Broker
2. Address prefixing/name scheme
3. Protocol

I think the prefix also needs to be configurable, but "queue:///"
"topic:///" seems like a sensible default.

>
> 2. In ActiveMQ 5.x, AMQP and STOMP handled the addressing by using
> queue:/// and topic:/// prefixes. I don't think that is necessarily a bad
> thing, but something to consider b/c we need to support #1
>
+1

>
> 3. As far as destination behaviors, how about using uri parameters to pass
> provider (Artemis) specific settings on-the-fly?
>
>     For example:  in AMPQ the address could be
> topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.


>     There is precedence in using uri parameters to configure the
> Destination in JMS as well. IBM MQ has session.createQueue("My.Queue?
> targetClient=1")
>
>     Note: AMQP supports options as well, so that could be used as well.
> However, uri's tend to be better for externalizing configuration management.
>
I think supporting both options, uri and protocol specific parameters is
useful.  Rather than "nonSharedDurable" I think I'd prefer to map these
things onto address properties.  For example:

"topic://foo?maxConsumers=1"

Where the "topic:///" prefix is configurable.  This is essentially a non
shared, durable subscription.

>
> 4. Destination name separation b/w protocol handlers.  STOMP and MQTT like
> "/" and JMS likes "." as destination name separators. Is there any thought
> to having a native destination name separator and then have
> protocol-specific convertors?
>
This is how it works right now.  We have a native path separator which is
".".  Protocol handlers map subscription addresses down to this.  This does
mean that to define a multicast address for MQTT, you would need to do:

"foo.bar" (vs "foo/bar" which is protocol specific).

I've also outlined in the proposal a goal to allow these path separators to
be configurable.  So you can specify pathSeparator="/", "." which would
mean that you can configure "foo/bar" or "foo.bar" they'd both act in the
same way.

>
> 5. Fully qualified destination names that include a broker name. Other
> providers support fully-qualified destination names in JMS following the
> format:  queue://$brokerName/$queueName. Adding this would go a long way
> to supporting migration of current applications without having to change
> client-code.
>
This is a little differently to how clustering currently works, I think we
need to give this one some more thought.  Right now queues are clustered
automatically (well providing you enable the correct address namespace for
the cluster connection).  If you have a client listening on broker2 and a
producer from broker1, the messages will get propagated across the cluster.

You may have already explained this to me in the past, but can you give me
an example use case of when this might be necessary.

>
>     Note: This would probably impact cluster handling as well, so perhaps
> in phase 1 there is just a placeholder for supporting a broker name in the
> future?
>
> -Matt

Thanks
Martyn

>
>
> On 11/16/16 10:16 AM, Martyn Taylor wrote:
>
>> All,
>>
>> Some discussion has happened around this topic already, but I wanted to
>> ensure that everyone here, who have not been following the
>> JIRA/ARTEMIS-780
>> branch has a chance for input and to digest the information in this
>> proposal.
>>
>> In order to understand the motivators outlined here, you first need to
>> understand how the existing addressing model works in Artemis. For those
>> of
>> you who are not familiar with how things currently work, I’ve added a
>> document to the ARTEMIS-780 JIRA in the attachments section, that gives an
>> overview of the existing model and some more detail / examples of the
>> proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
>> <https://issues.apache.org/jira/browse/ARTEMIS-780>*
>>
>> To summarise here, the Artemis routing/addressing model has some
>> restrictions:
>>
>> 1. It’s not possible with core (and therefore across all protocols) to
>> define ,at the broker side, semantics about addresses. i.e. whether an
>> address behaves as a “point to point” or “publish subscribe” end point
>>
>> 2. For JMS destinations additional configuration and objects were added to
>> the broker, that rely on name-spacing to add semantics to addresses i.e.
>> “jms.topic.” “jms.queue.”  A couple of issues with this:
>>
>>     1.
>>
>>     This only works for JMS and no other protocols
>>     2.
>>
>>     Name-spacing causes issues for cross protocol communication
>>     3.
>>
>>     It means there’s two ways of doing things, 1 for JMS and 1 for
>>     everything else.
>>
>> 3. The JMS and Core destination definitions do not have enough information
>> to define more intricate behaviours. Such as whether an address should
>> behave like a “shared subscription” or similar to a “volatile
>> subscription”
>> where clients don’t get messages missed when they are offline.
>>
>> 4. Some protocols (AMQP is a good example) don’t have enough information
>> in
>> their frames for the broker to determine how to behave for certain
>> endpoints and rely on broker side configuration (or provider specific
>> parameters).
>>
>> Proposal
>>
>> What I’d like to do (and what I’ve proposed in ARTEMIS-780) is to get rid
>> of the JMS specific components and create a single unified mechanism for
>> configuring all types of endpoints across all protocols to define:
>>
>>     -
>>
>>     Point to point (queue)
>>     -
>>
>>     Shared Durable Subscriptions
>>     -
>>
>>     Shared Non Durable Subscriptions
>>     -
>>
>>     Non Shared durable subscriptions
>>     -
>>
>>
>>     Non Shared Non durable subscriptions
>>
>> To do this, the idea is to create a new “Address” configuration/management
>> object, that has certain properties such as a routing type which
>> represents
>> how messages are routed to queues with this address.
>>
>> When a request for subscription is received by Artemis, the relevant piece
>> can just look up the address and check it’s properties to determine how to
>> behave (or if an address doesn’t exist) then default to our existing
>> behaviour. For those interested in the details of how this might work I’ve
>> outlined some specific examples in the document on the JIRA.
>>
>> What are the user impacts:
>>
>> 1. Configuration would need to be revised in order to expose the new
>> addressing object. I propose that we either continue supporting the old
>> schema for a while and/or provide a tool to migrate the configuration
>> schema.
>>
>> 2. Some new management operations would need to be added to expose the new
>> objects.
>>
>> 3. The JMS configuration and management objects would become obsolete, and
>> would need removing. The Broker side JMS resources were only a thin facade
>> to allow some JMS specific behaviour for managing destinations and for
>> things like registering objects in JNDI.
>>
>> Broker side JNDI was removed in Artemis 1.0 in order to align with
>> ActiveMQ
>> 5.x style of client side JNDI.  These JMS pieces and their management
>> objects don't really do much, creating connection factories for instance
>> offers no functionality right now.  Going forward, users should be able to
>> use the core management API to do everything.
>>
>> 4. All client applications should behave exactly as they were before. The
>> proposal is for adding features to the core model, not removing any.  For
>> things like the Artemis JMS client which relied on name-spaces, they’ll be
>> a mechanism to define a name-spaced address and a mechanism to switch back
>> on name-spaces in the client.
>>
>> 5. Given some of the API changes and removal of the JMS specific pieces.
>> This would likely warrant a major bump. i.e. Artemis 2.0.0.
>>
>> Whilst I’ve been looking at this, it’s become apparent, that the JMS
>> pieces
>> have leaked into lots of areas of the code base, which does mean we’d need
>> to do a fair amount refactoring to move these bits to the new model.
>>
>> In my opinion this proposal can only be a good thing. It creates a single
>> place (core) where all addressing objects are configured and managed and
>> allows all protocol managers to plug into the same mechanism. It solves
>> some of the cross protocol JMS → other protocols that we’ve seen users
>> struggle with, but still offers a way to support all the old behaviour in
>> client applications.
>>
>> What are others thoughts on this? Any suggestions, comments or concerns?
>>
>> Regards
>> Martyn
>>
>>
>

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Clebert Suconic <cl...@gmail.com>.
Why would someone want a queue and a topic address with the same name?

Notice that users can still use prefixes on their addresses. This will
be optional now though.  If an user wants that they can call it
topic.Order and queue.Order for instance. or whatever other prefix
they like. But they don't have to.

The only difference here is that the user would be in control. (AFAIK)

> 1. Separate topic and queue namespaces.. in JMS topic:///foo !=
> queue:///foo. The addressing will need some sort of way to separate the two
> during naming collisions.



Notice that these issues were raised for some time already. For
instance Lionel Cons raised this one about 10 months ago:

https://issues.apache.org/jira/browse/ARTEMIS-410

But  I have seen other complains even before that also. (from
questions on the user's list that were solved with explaining the
prefix schema for instance).



We will need to talk about the 2.0 bump on another thread probably.

Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

Posted by Matt Pavlovich <ma...@gmail.com>.
Hi Martin-

Glad to see this area getting dedicated attention. A couple things I 
didn't see covered in the doc or the JIRA comments. (I'll be adding to 
the JIRA comments as well.)

Items:

0. Pre-configuring destinations is a big brain drain, so anything that 
can be client-driven is a win. Also, protocol specific handlers could 
perform the admin operations on-demand.

    For example:  session.createDurableSubscriber(...)   The JMS handler 
create the subscription on the behalf of the client.

1. Separate topic and queue namespaces.. in JMS topic:///foo != 
queue:///foo. The addressing will need some sort of way to separate the 
two during naming collisions.

2. In ActiveMQ 5.x, AMQP and STOMP handled the addressing by using 
queue:/// and topic:/// prefixes. I don't think that is necessarily a 
bad thing, but something to consider b/c we need to support #1

3. As far as destination behaviors, how about using uri parameters to 
pass provider (Artemis) specific settings on-the-fly?

     For example:  in AMPQ the address could be 
topic:///foo?type=nonSharedNonDurable etc..  same for MQTT, STOMP, etc.

     There is precedence in using uri parameters to configure the 
Destination in JMS as well. IBM MQ has 
session.createQueue("My.Queue?targetClient=1")

     Note: AMQP supports options as well, so that could be used as well. 
However, uri's tend to be better for externalizing configuration management.

4. Destination name separation b/w protocol handlers.  STOMP and MQTT 
like "/" and JMS likes "." as destination name separators. Is there any 
thought to having a native destination name separator and then have 
protocol-specific convertors?

5. Fully qualified destination names that include a broker name. Other 
providers support fully-qualified destination names in JMS following the 
format:  queue://$brokerName/$queueName. Adding this would go a long way 
to supporting migration of current applications without having to change 
client-code.

     Note: This would probably impact cluster handling as well, so 
perhaps in phase 1 there is just a placeholder for supporting a broker 
name in the future?

-Matt

On 11/16/16 10:16 AM, Martyn Taylor wrote:
> All,
>
> Some discussion has happened around this topic already, but I wanted to
> ensure that everyone here, who have not been following the JIRA/ARTEMIS-780
> branch has a chance for input and to digest the information in this
> proposal.
>
> In order to understand the motivators outlined here, you first need to
> understand how the existing addressing model works in Artemis. For those of
> you who are not familiar with how things currently work, I\u2019ve added a
> document to the ARTEMIS-780 JIRA in the attachments section, that gives an
> overview of the existing model and some more detail / examples of the
> proposal: *https://issues.apache.org/jira/browse/ARTEMIS-780
> <https://issues.apache.org/jira/browse/ARTEMIS-780>*
>
> To summarise here, the Artemis routing/addressing model has some
> restrictions:
>
> 1. It\u2019s not possible with core (and therefore across all protocols) to
> define ,at the broker side, semantics about addresses. i.e. whether an
> address behaves as a \u201cpoint to point\u201d or \u201cpublish subscribe\u201d end point
>
> 2. For JMS destinations additional configuration and objects were added to
> the broker, that rely on name-spacing to add semantics to addresses i.e.
> \u201cjms.topic.\u201d \u201cjms.queue.\u201d  A couple of issues with this:
>
>     1.
>
>     This only works for JMS and no other protocols
>     2.
>
>     Name-spacing causes issues for cross protocol communication
>     3.
>
>     It means there\u2019s two ways of doing things, 1 for JMS and 1 for
>     everything else.
>
> 3. The JMS and Core destination definitions do not have enough information
> to define more intricate behaviours. Such as whether an address should
> behave like a \u201cshared subscription\u201d or similar to a \u201cvolatile subscription\u201d
> where clients don\u2019t get messages missed when they are offline.
>
> 4. Some protocols (AMQP is a good example) don\u2019t have enough information in
> their frames for the broker to determine how to behave for certain
> endpoints and rely on broker side configuration (or provider specific
> parameters).
>
> Proposal
>
> What I\u2019d like to do (and what I\u2019ve proposed in ARTEMIS-780) is to get rid
> of the JMS specific components and create a single unified mechanism for
> configuring all types of endpoints across all protocols to define:
>
>     -
>
>     Point to point (queue)
>     -
>
>     Shared Durable Subscriptions
>     -
>
>     Shared Non Durable Subscriptions
>     -
>
>     Non Shared durable subscriptions
>     -
>
>     Non Shared Non durable subscriptions
>
> To do this, the idea is to create a new \u201cAddress\u201d configuration/management
> object, that has certain properties such as a routing type which represents
> how messages are routed to queues with this address.
>
> When a request for subscription is received by Artemis, the relevant piece
> can just look up the address and check it\u2019s properties to determine how to
> behave (or if an address doesn\u2019t exist) then default to our existing
> behaviour. For those interested in the details of how this might work I\u2019ve
> outlined some specific examples in the document on the JIRA.
>
> What are the user impacts:
>
> 1. Configuration would need to be revised in order to expose the new
> addressing object. I propose that we either continue supporting the old
> schema for a while and/or provide a tool to migrate the configuration
> schema.
>
> 2. Some new management operations would need to be added to expose the new
> objects.
>
> 3. The JMS configuration and management objects would become obsolete, and
> would need removing. The Broker side JMS resources were only a thin facade
> to allow some JMS specific behaviour for managing destinations and for
> things like registering objects in JNDI.
>
> Broker side JNDI was removed in Artemis 1.0 in order to align with ActiveMQ
> 5.x style of client side JNDI.  These JMS pieces and their management
> objects don't really do much, creating connection factories for instance
> offers no functionality right now.  Going forward, users should be able to
> use the core management API to do everything.
>
> 4. All client applications should behave exactly as they were before. The
> proposal is for adding features to the core model, not removing any.  For
> things like the Artemis JMS client which relied on name-spaces, they\u2019ll be
> a mechanism to define a name-spaced address and a mechanism to switch back
> on name-spaces in the client.
>
> 5. Given some of the API changes and removal of the JMS specific pieces.
> This would likely warrant a major bump. i.e. Artemis 2.0.0.
>
> Whilst I\u2019ve been looking at this, it\u2019s become apparent, that the JMS pieces
> have leaked into lots of areas of the code base, which does mean we\u2019d need
> to do a fair amount refactoring to move these bits to the new model.
>
> In my opinion this proposal can only be a good thing. It creates a single
> place (core) where all addressing objects are configured and managed and
> allows all protocol managers to plug into the same mechanism. It solves
> some of the cross protocol JMS \u2192 other protocols that we\u2019ve seen users
> struggle with, but still offers a way to support all the old behaviour in
> client applications.
>
> What are others thoughts on this? Any suggestions, comments or concerns?
>
> Regards
> Martyn
>