Posted to users@activemq.apache.org by Stephen Baker <st...@rmssoftwareinc.com> on 2022/04/13 13:30:31 UTC

failed to expire messages - TimeoutException

Hello,

We’re using Artemis 2.20. We had a misbehaving application that had been opening consumers without closing them, which I recently addressed. The fix was deployed today, and since then I have been seeing a lot of the following error (as the consumer count very slowly trickles down):

2022-04-13 08:54:42,957 ERROR [org.apache.activemq.artemis.core.server] AMQ224013: failed to expire messages for queue: java.util.concurrent.TimeoutException: UpdateOutboundRetry
    at org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl$ExpiryReaper.run(PostOfficeImpl.java:1861) [artemis-server-2.20.0.jar:2.20.0]
    at org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.runForExecutor(ActiveMQScheduledComponent.java:313) [artemis-commons-2.20.0.jar:2.20.0]
    at org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.lambda$bookedRunForScheduler$2(ActiveMQScheduledComponent.java:320) [artemis-commons-2.20.0.jar:2.20.0]
    at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.20.0.jar:]
    at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.20.0.jar:]
    at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65) [artemis-commons-2.20.0.jar:]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [java.base:]
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [java.base:]
    at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.20.0.jar:]

Just wondering if these errors cause any lasting damage, whether they mean that something is not tuned correctly, or if this is a normal part of recovering from such a severe leak (we had hundreds of thousands of stale consumers on that queue).
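As an aside on fixes for this kind of leak: in JMS-style APIs, consumers implement AutoCloseable, so try-with-resources guarantees they are closed even when an exception is thrown. The sketch below uses a hypothetical stub in place of a real broker connection purely to illustrate the pattern; a real fix would apply the same structure to jakarta.jms.MessageConsumer.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConsumerLeakDemo {
    // Tracks how many consumers are currently open (stand-in for broker-side state).
    static final AtomicInteger OPEN_CONSUMERS = new AtomicInteger();

    // Hypothetical stub standing in for a real JMS MessageConsumer.
    static class StubConsumer implements AutoCloseable {
        StubConsumer() { OPEN_CONSUMERS.incrementAndGet(); }
        String receive() { return "message"; }
        @Override public void close() { OPEN_CONSUMERS.decrementAndGet(); }
    }

    public static void main(String[] args) {
        // Leaky pattern: the consumer is never closed, so broker-side state accumulates.
        StubConsumer leaked = new StubConsumer();
        leaked.receive();

        // Fixed pattern: try-with-resources closes the consumer deterministically,
        // even if receive() were to throw.
        try (StubConsumer c = new StubConsumer()) {
            c.receive();
        }

        // Only the deliberately leaked consumer remains open.
        System.out.println("open consumers: " + OPEN_CONSUMERS.get());
    }
}
```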

Stephen E Baker



Re: failed to expire messages - TimeoutException

Posted by Clebert Suconic <cl...@gmail.com>.
Purely by coincidence, without even having read this thread, I had
opened this JIRA issue and pull request:

https://issues.apache.org/jira/browse/ARTEMIS-3778

https://github.com/apache/activemq-artemis/pull/4029


Feedback is welcome since you're hitting the warning... the change
pretty much removes the warning.

On Wed, Apr 13, 2022 at 10:45 AM Justin Bertram <jb...@apache.org> wrote:



-- 
Clebert Suconic

Re: failed to expire messages - TimeoutException

Posted by Justin Bertram <jb...@apache.org>.
The exception just indicates that the expiration task didn't finish in
the hard-coded 10-second allotment of time. There is no "damage," lasting or
otherwise. The expiry reaper scans the messages in the queue for any that
have expired. I wouldn't expect the number of consumers on that queue to
directly impact this, although if all those consumers are making the
broker run more slowly in general, I suppose that could indirectly impact
it.
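The synchronous pre-dispatch expiration check described above can be sketched as follows. The types and method names here are illustrative, not Artemis's actual internals; in Artemis an expired message would be routed to the configured expiry address rather than simply dropped.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class DispatchSketch {
    // Illustrative message with an absolute expiration timestamp (0 = never expires).
    record Msg(String body, long expiration) {
        boolean isExpired(long now) { return expiration != 0 && now >= expiration; }
    }

    // Pre-dispatch check: expired messages are skipped before delivery, so a
    // consumer never sees one, regardless of whether a background reaper has run.
    static Msg dispatchNext(Queue<Msg> queue, long now) {
        Msg m;
        while ((m = queue.poll()) != null) {
            if (m.isExpired(now)) continue; // expired: never delivered
            return m;
        }
        return null; // queue drained
    }

    public static void main(String[] args) {
        Queue<Msg> q = new ArrayDeque<>();
        q.add(new Msg("stale", 100)); // already expired relative to now = 200
        q.add(new Msg("fresh", 0));   // never expires
        Msg delivered = dispatchNext(q, 200);
        System.out.println("delivered: " + delivered.body());
    }
}
```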

Regarding tuning, you could run the expiry scanner less often by
configuring message-expiry-scan-period in broker.xml. See the
documentation [1] for more details. Keep in mind that removing
expired messages from the queue is just a memory-saving feature and not
strictly necessary: every message is checked for expiration synchronously
before it is dispatched, so no consumer should receive an expired message
even if the reaper thread is completely disabled.
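For reference, a minimal broker.xml fragment along those lines might look like this (the 300000 ms value is purely illustrative; the element lives in the broker's core configuration, and -1 disables the scan entirely):

```xml
<!-- Illustrative broker.xml fragment: run the expiry reaper every 5 minutes. -->
<core xmlns="urn:activemq:core">
   <message-expiry-scan-period>300000</message-expiry-scan-period>
</core>
```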


Justin

[1]
https://activemq.apache.org/components/artemis/documentation/latest/message-expiry.html#configuring-the-expiry-reaper-thread

On Wed, Apr 13, 2022 at 8:30 AM Stephen Baker <
stephen.baker@rmssoftwareinc.com> wrote:

> Hello,
>
> We’re using Artemis 2.20. We had a misbehaving application that had been
> opening consumers without closing them which I recently addressed. The fix
> was deployed today and since then I have been seeing a lot of the following
> error (as the consumer count is very slowly trickling down)
>
> 2022-04-13 08:54:42,957 ERROR [org.apache.activemq.artemis.core.server]
> AMQ224013: failed to expire messages for queue:
> java.util.concurrent.TimeoutException: UpdateOutboundRetry at
> org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl$ExpiryReaper.run(PostOfficeImpl.java:1861)
> [artemis-server-2.20.0.jar:2.20.0] at
> org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.runForExecutor(ActiveMQScheduledComponent.java:313)
> [artemis-commons-2.20.0.jar:2.20.0] at
> org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.lambda$bookedRunForScheduler$2(ActiveMQScheduledComponent.java:320)
> [artemis-commons-2.20.0.jar:2.20.0] at
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
> [artemis-commons-2.20.0.jar:] at
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
> [artemis-commons-2.20.0.jar:] at
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> [artemis-commons-2.20.0.jar:] at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [java.base:] at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [java.base:] at
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> [artemis-commons-2.20.0.jar:]
>
> Just wondering if these errors cause any lasting damage, and if it means
> that something is not tuned correctly, or is a normal part of recovering
> from such a severe leak (we had hundreds of thousands of stale consumers on
> that queue.)
>
> Stephen E Baker
>
>
>