Posted to users@activemq.apache.org by AntonR <an...@volvo.com> on 2018/03/09 11:19:26 UTC

Disable prefetch buffer check?

Hi,

I have encountered what I believe is a fringe issue with forwards within a
network of brokers.

My setup has multiple components posting and reading messages to and from
each other. The larger flows are connected to all brokers at once for
increased throughput, whereas the small components just connect to a random
broker and from time to time rely on internal forwards to get their
messages.

Since we always want to prefer local consumers, to reduce unnecessary
traffic between brokers, we use the setting
"decreaseNetworkConsumerPriority". Everything works as expected, except
when any of the "larger" components becomes unavailable and builds up a
backlog. This generates an extreme number of forwards, and my analysis is
that it happens because the prefetch buffer of the receiving component is
full.
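For reference, the setting in question is an attribute on the broker's
networkConnector. A typical configuration (broker names and URIs here are
placeholders, not taken from the actual setup) looks like:

```xml
<networkConnectors>
  <!-- decreaseNetworkConsumerPriority lowers the priority of consumers
       reached over the network bridge (the more hops, the lower), so the
       broker prefers dispatching to locally connected consumers. -->
  <networkConnector name="bridge-to-brokerB"
                    uri="static:(tcp://brokerB:61616)"
                    decreaseNetworkConsumerPriority="true"/>
</networkConnectors>
```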

This seems to trigger a forward, and I can understand the logic behind why
it does that... the issue, though, is that all consumers on the receiving
broker also have their prefetch buffers full, so that broker forwards the
message as well... this keeps going until the entire backlog is processed.

So my question is: can I either (A) change something in my setup to prevent
the issue, or (B) have the prefetch buffer be a non-factor in determining
whether there are available consumers on the broker? Maybe a configurable
flag?

Br,
Anton



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: Disable prefetch buffer check?

Posted by Tim Bain <tb...@alumni.duke.edu>.
Anton,

Unfortunately that sounds like expected behavior in that situation.

It sounds like you'd want a new config flag called something like
dispatchOnlyToHighestPriorityConsumers, which would route messages only to
local consumers (if at least one exists) or, failing that, only to the set
of consumers connected via the fewest number of networked brokers.

There's no such capability built into ActiveMQ 5.x at the moment, but if
you were up for implementing it yourself, I don't think it would be very
difficult to do. (You can also submit an enhancement request in JIRA and
hope that someone else will implement it for you, but doing it yourself is
the only way to guarantee that it gets done.) If you choose to go down this
path, I think you'll find the right code to modify just by tracing how the
decreaseNetworkConsumerPriority flag was implemented.
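Conceptually, the selection such a flag would perform might look something
like the standalone sketch below. This is not ActiveMQ code, and all names
in it are hypothetical; it only illustrates the idea of dispatching solely
to the consumers sharing the highest priority (i.e., the fewest network
hops away, when network consumers get decreasing priorities).

```java
import java.util.*;

// Hypothetical sketch: given consumer priorities, keep only the consumers
// tied for the highest priority. With decreaseNetworkConsumerPriority
// enabled, local consumers have the highest priority and networked
// consumers get lower priorities the more bridges away they are.
public class HighestPrioritySelector {

    public static List<String> selectHighestPriority(Map<String, Integer> consumers) {
        // Find the maximum priority among all known consumers.
        int max = Integer.MIN_VALUE;
        for (int p : consumers.values()) {
            max = Math.max(max, p);
        }
        // Keep only the consumers at that priority; everyone else
        // (i.e., consumers further away in the network) is skipped.
        List<String> selected = new ArrayList<>();
        for (Map.Entry<String, Integer> e : consumers.entrySet()) {
            if (e.getValue() == max) {
                selected.add(e.getKey());
            }
        }
        return selected;
    }
}
```

In a real implementation the interesting part is where this filter hooks
into the dispatch path, which is why tracing the existing flag is the
practical starting point.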

Tim
