Posted to users@activemq.apache.org by "Rahn Nicholas, Bedag" <Ni...@bedag.ch> on 2014/07/03 11:26:34 UTC

Messages in queue but not consumed

We've noticed a problem in our production ActiveMQ 5.8 instances where messages remain in a queue even when there are active consumers on that queue. Most messages are consumed, but a few are not and just seem to be stuck in the queue. Here's our setup (a rough configuration sketch follows the list):

-       2 instances of ActiveMQ 5.8 in a failover (master/slave) setup on Linux
-       JDBC (MSSQL) for the message store
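
For context, the relevant part of our broker configuration looks roughly like this. The datasource bean name, driver settings and credentials below are simplified placeholders rather than our exact config; both brokers point at the same database, and the JDBC lock decides which one is master:

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <!-- shared JDBC store; the second broker has an identical adapter pointing at the same database -->
    <jdbcPersistenceAdapter dataSource="#mssql-ds"/>
  </persistenceAdapter>
</broker>

<!-- placeholder datasource definition -->
<bean id="mssql-ds" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
  <property name="url" value="jdbc:sqlserver://dbhost:1433;databaseName=activemq"/>
  <property name="username" value="activemq"/>
  <property name="password" value="secret"/>
</bean>

Clients connect with failover:(tcp://broker1:61616,tcp://broker2:61616) so they follow whichever broker currently holds the database lock.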

I've been able to reproduce this situation manually by doing the following (a minimal code sketch of the scenario appears after the steps):

1.      Create 2 consumers on a queue.
2.      Run 2 producers (from 2 separate processes) simultaneously, each sending 1000 messages to the queue.
3.      Use JMX to check the QueueSize of the queue.
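
For reference, here is a minimal sketch of the scenario in plain JMS. The broker URL, queue name and message payloads are placeholders, and in the real test the two producers run from separate processes:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StuckMessageRepro {

    private static final String BROKER_URL = "failover:(tcp://broker1:61616,tcp://broker2:61616)"; // placeholder hosts
    private static final String QUEUE = "TEST.STUCK"; // placeholder queue name

    // Step 1: a consumer that just logs what it receives (we start two of these).
    static void startConsumer(String name) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(BROKER_URL).createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(QUEUE));
        consumer.setMessageListener(message -> System.out.println(name + " received " + message));
        // connection is left open on purpose so the consumer keeps listening
    }

    // Step 2: a producer that sends 1000 messages (we run two of these, from separate processes).
    static void runProducer(String name) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(BROKER_URL).createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue(QUEUE));
        for (int i = 0; i < 1000; i++) {
            producer.send(session.createTextMessage(name + "-" + i));
        }
        connection.close();
    }
}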

Not every time, but on every second or third run of the 2 simultaneous producers, not all of the 2000 messages are consumed. When this happens, the consumers are still running and idle, but the QueueSize is non-zero (usually anywhere from 1 to 5) and I can see the messages in the database. The unconsumed messages do not stop later messages from being consumed, however. A broker restart causes the 'stuck' messages to be dispatched to the consumers immediately.
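
And this is roughly how I read the QueueSize attribute over JMX. The JMX port, broker name and queue name are placeholders; the ObjectName layout is the pre-5.9 naming scheme that 5.8 uses:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueSizeCheck {
    public static void main(String[] args) throws Exception {
        // placeholder JMX URL; port 1099 is just the common default
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            // 5.8-style object name (5.9 changed the naming scheme)
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:BrokerName=broker1,Type=Queue,Destination=TEST.STUCK");
            Long queueSize = (Long) mbeans.getAttribute(queue, "QueueSize");
            System.out.println("QueueSize = " + queueSize);
        }
    }
}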

It seems to have something to do with the multiple simultaneous producers, as running just 1 producer at a time works as expected, with all messages consumed. I have tried to create a self-contained unit test (i.e. an embedded vm:// broker) for this, but was not able to reproduce the situation. However, a unit test connecting to a remote broker did show the same behaviour (with many more stuck messages). That leads me to guess that the issue is perhaps in the JDBC store code, but that's just a guess.
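
To illustrate the two test variants: the self-contained test starts an embedded broker and talks to it over the vm:// transport, while the other variant points the same producer/consumer code at the remote broker and therefore at the JDBC store. The host names and settings below are placeholders; note that the embedded broker here is non-persistent, which may be exactly why it does not reproduce a JDBC-store problem:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class BrokerVariants {
    public static void main(String[] args) throws Exception {
        // Variant 1: embedded broker with an in-VM transport; no JDBC store involved.
        BrokerService broker = new BrokerService();
        broker.setPersistent(false); // memory store only
        broker.setUseJmx(false);
        broker.start();
        ActiveMQConnectionFactory inVm =
                new ActiveMQConnectionFactory("vm://localhost?create=false");

        // Variant 2: the remote broker pair with the shared JDBC (MSSQL) store.
        ActiveMQConnectionFactory remote =
                new ActiveMQConnectionFactory("failover:(tcp://broker1:61616,tcp://broker2:61616)");

        // ... run the same 2-producer / 2-consumer scenario against either factory ...

        broker.stop();
    }
}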

Is this a known issue?  Is there any workaround for it? Is it corrected in a version later than 5.8?

Thanks for the help.
Nick



Re: Messages in queue but not consumed

Posted by Rural Hunter <ru...@gmail.com>.
Maybe this one?
https://issues.apache.org/jira/browse/AMQ-2009



RE: Messages in queue but not consumed

Posted by "Rahn Nicholas, Bedag" <Ni...@bedag.ch>.
Thanks for the tip.

Does anyone know which change/bug-fix/Jira in 5.9 could have fixed this? Any clues/keywords for locating the fix?

Thanks,
Nick


RE: Messages in queue but not consumed

Posted by Donnell Alwyn <Al...@uk.mizuho-sc.com>.
I had a similar issue in ActiveMQ 5.7. 

Messages stuck on the pending queue would never come off; new messages would process OK. A broker restart was required to get the stuck messages off the pending queue.

Upgrading to 5.9 solved my problem.

Regards
Alwyn

------------------------------------------------------------------------
Alwyn Donnell

ISD Middleware and Architecture
Mizuho International
Bracken House
1 Friday Street
London EC4M 9JA

email: alwyn.donnell@uk.mizuho-sc.com
Tel.: +44 (0)20 7090 6569

