Posted to users@activemq.apache.org by Victor <vi...@gmail.com> on 2018/06/27 04:25:44 UTC

[ARTEMIS] Can Artemis act as a proxy for Artemis?

I'm still playing with different topologies of ActiveMQ Artemis in
Kubernetes. An almost satisfactory one (also playing with colocated but
anti-affinity is difficult there) is to have master and slaves paired in
two stateful sets:

        +-----------------------+
        |                       |
+-------+--------+     +--------+-------+
|artemis master 1|     |artemis master 2|
+-------+--------+     +--------+-------+
        |                       |
        |group-name=artemis-1   |group-name=artemis-2
        v                       v
+-------+--------+     +--------+-------+
|artemis slave 1 |     |artemis slave 2 |
+----------------+     +----------------+
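For reference, the pairing above comes from the replication group-name; a
trimmed sketch of the relevant broker.xml bits (everything else omitted):

<ha-policy>
  <replication>
    <master>
      <!-- pairs this master with the slave declaring the same group -->
      <group-name>artemis-1</group-name>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

and on the matching slave:

<ha-policy>
  <replication>
    <slave>
      <group-name>artemis-1</group-name>
    </slave>
  </replication>
</ha-policy>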

Note that this configuration also has inter-pod anti-affinity
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>
between masters and slaves so they don't end up running on the same
physical node. Also, there is a disruption budget
<https://kubernetes.io/docs/concepts/workloads/pods/disruptions/> of one,
so at most one master or slave can be down at any given time without
risking data loss.
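
In Kubernetes terms that is roughly the following (a sketch; the app/role
labels are made up for illustration):

# on the slave pod template: keep slaves off nodes already running a master
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: artemis
          role: master
      topologyKey: kubernetes.io/hostname
---
# at most one artemis pod may be voluntarily disrupted at a time
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: artemis-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: artemis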

This could be acceptable as version 1, as it might be useful for many
users. However, I found a little thing that is usually fine but is
bothersome for Kubernetes: slaves do not open ports nor serve traffic
while they are just being slaves. Kubernetes has one special nuance in
terms of load balancing: the load balancer does not check whether the
pods are healthy. It is Kubernetes itself that does two checks, liveness
("should I restart you?") and readiness ("are you ready?"). Readiness
means both "I'm started" and "I'm ready to receive traffic". Given that
slaves do not open ports, they won't typically be ready (if they were,
the load balancer would route to them and those requests would fail).
And thus the Helm chart presents weird behaviors, for instance the
following:

helm install activemq-artemis --wait

will time out, as --wait waits for every pod to be in the ready state.
Unless I go for a much more sophisticated balancing solution, this is
mostly unavoidable and undesirable.
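
To make it concrete: the natural readiness probe is a TCP check on the
acceptor, something like the sketch below (port and timings are my
assumptions), and an idle slave can never pass it because the port stays
closed.

readinessProbe:
  tcpSocket:
    port: 61616        # default Artemis acceptor; closed on a standby slave
  initialDelaySeconds: 10
  periodSeconds: 5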

One possible solution I have contemplated might be perhaps a bit too
creative, and I'd prefer to run it here before executing it. What if I
set up a cluster of Artemis with no persistence, no local queues, and
just core connections to the real servers:


          +-----load balancer----+
          |                      |
          |                      |
          |                      |
          |                      |
    +--proxy 1--+         +---proxy 2--+
    |           |         |            |
    |           |         |            |
    |           |         |            |
    |           |         |            |
    |           |         |            |
 master 1    slave 1    master 2   slave 2


With my limited understanding, I believe those mostly stateless Artemis
instances would act as a proxy, which is just what I want: wrapping the
needs of Kubernetes into a proxy layer with no need for new code.
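
Configuration-wise I was imagining something like this sketch for each
proxy broker (untested; hostnames are illustrative, and whether messages
would actually flow through it is exactly my question):

<persistence-enabled>false</persistence-enabled>
<connectors>
  <connector name="self">tcp://localhost:61616</connector>
  <connector name="master-1">tcp://artemis-master-1:61616</connector>
  <connector name="master-2">tcp://artemis-master-2:61616</connector>
</connectors>
<cluster-connections>
  <cluster-connection name="proxy-cluster">
    <connector-ref>self</connector-ref>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <static-connectors>
      <connector-ref>master-1</connector-ref>
      <connector-ref>master-2</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>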

Is this assumption right? Would there be a risk of data loss? I assume
there would be unless I activate persistence; is there a workaround for
this?

Thanks!

Re: [ARTEMIS] Can Artemis act as a proxy for Artemis?

Posted by Victor <vi...@gmail.com>.
>I'm not aware of any functionality in the broker that fulfills this
> requirement.

Thanks

> At this point I can't really comment on any alternative solution because
I ...

Yeah, sorry about that, I might have shared too many details :(




Re: [ARTEMIS] Can Artemis act as a proxy for Artemis?

Posted by Justin Bertram <jb...@apache.org>.
> I was wondering if such a thing as a store-and-forward minus the store
part
> (hence the no queues) was possible at all...

I'm not aware of any functionality in the broker that fulfills this
requirement.

At this point I can't really comment on any alternative solution because I
really don't understand the problem.  I've not really worked with
Kubernetes or Helm before, so I'm not clear on how the architecture works
and how it fits (or doesn't fit) with Artemis clustering & HA.


Justin


Re: [ARTEMIS] Can Artemis act as a proxy for Artemis?

Posted by Victor <vi...@gmail.com>.
> I'm not clear on the role which the proxies would play.  Can you clarify
that?

I believe today it is doable in a network-of-brokers fashion doing store
and forward. But if I store, then I'd need backups, and the point is lost.

I was wondering if such a thing as store-and-forward minus the store part
(hence the no queues) was possible at all, but I am conscious it's a bit
of a stretch request.

> In general, if you had a broker with no queues then any client trying to
> send a message to that broker would fail.  Neither the connections nor the
> messages would somehow pass through that broker to another broker.

I see. I'll probably have to think about something custom at the network
layer instead of the k8s constructs, HAProxy or similar. Or perhaps I
should stick to AMQP only and use the Qpid Dispatch Router.
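
Something in the spirit of this HAProxy sketch (untested, hostnames
invented), where the TCP check keeps traffic away from slaves whose
acceptor port is still closed:

listen artemis
  bind *:61616
  mode tcp
  option tcp-check
  server master1 artemis-master-1:61616 check
  server master2 artemis-master-2:61616 check
  server slave1  artemis-slave-1:61616  check
  server slave2  artemis-slave-2:61616  check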

Thanks


Re: [ARTEMIS] Can Artemis act as a proxy for Artemis?

Posted by Justin Bertram <jb...@apache.org>.
I'm not clear on the role which the proxies would play.  Can you clarify
that?

In general, if you had a broker with no queues then any client trying to
send a message to that broker would fail.  Neither the connections nor the
messages would somehow pass through that broker to another broker.


Justin
