Posted to users@activemq.apache.org by ba...@wellsfargo.com on 2013/11/21 18:51:27 UTC

Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Version: ActiveMQ v5.8
Embedded Broker, Producer, Consumer all within same JVM

If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.  This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?
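
For reference, a minimal sketch of the kind of setup described above: embedded broker, producer, and consumer in one JVM, with 32MB journal files and a 320MB limit. The class name, paths, and the choice of which limit to set are illustrative assumptions rather than the actual application code.

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class EmbeddedBrokerSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(true);

        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("data/kahadb"));
        // 32MB journal files; ten of them correspond to the 320MB figure above
        kahaDB.setJournalMaxFileLength(32 * 1024 * 1024);
        broker.setPersistenceAdapter(kahaDB);

        // Whether 320MB should be the memory limit or the store limit is
        // exactly the distinction raised in the replies below.
        broker.getSystemUsage().getMemoryUsage().setLimit(320L * 1024 * 1024);
        broker.getSystemUsage().getStoreUsage().setLimit(320L * 1024 * 1024);

        broker.start();
        // ... producer and consumer connect over vm:// in the same JVM ...
        broker.stop();
    }
}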

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501




RE: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by ba...@wellsfargo.com.
We cut the journal files down in size to reduce the chance of running into the issue, but there is still the potential for a message on the DLQ to be a 'useful artifact' that keeps its journal file from being cleared, no?

<context:property-placeholder system-properties-mode="OVERRIDE"/>
<broker tmpDataDirectory="${ACTIVEMQ_STORE_DIR}/data/tmp_storage_dir" persistent="true" schedulerSupport="false" useJmx="true" brokerName="web-console">
  <managementContext>
    <managementContext createMBeanServer="false" createConnector="false"/>
  </managementContext>
  <persistenceAdapter>
    <kahaDB journalMaxFileLength="5mb" checkForCorruptJournalFiles="true" ignoreMissingJournalfiles="true" checksumJournalFiles="true" archiveCorruptedIndex="false" directory="${ACTIVEMQ_STORE_DIR}/data/kahadb"/>
  </persistenceAdapter>
  <transportConnectors>
    <transportConnector uri="tcp://localhost:${OPENWIRE_PORT}" name="openwire"/>
  </transportConnectors>
  <systemUsage>
    <systemUsage sendFailIfNoSpace="true">
      <memoryUsage><memoryUsage limit="100 mb"/></memoryUsage>
      <storeUsage><!-- contents collapsed in the original paste --></storeUsage>
      <tempUsage><tempUsage limit="500 mb"/></tempUsage>
    </systemUsage>
  </systemUsage>
</broker>
</beans:beans>

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501


-----Original Message-----
From: Christian Posta [mailto:christian.posta@gmail.com] 
Sent: Thursday, November 21, 2013 6:13 PM
To: users@activemq.apache.org
Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Inline...

On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
> Version: ActiveMQ v5.8
> Embedded Broker, Producer, Consumer all within same JVM
>
> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.

So you might need to post your config (or show the code for your
config if embedded). "Memory Limits" set to 320MB isn't the same thing
as "Store Limits" set to 320MB with 32MB journal files. Individual
files will be cleared out if there are no useful artifacts in them
(messages, durable subscription info, producer audit data structures,
etc...). The default cleanup period is 30s:

eg:

<kahaDB cleanupInterval="30000" ..>



>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?

Are producer/consumer using same connection? What ack mode is your
consumer using?

Since this is embedded (broker,producer,consumer) it should be easy
enough to extract out the salient points and put together a unit test.
If you provide something concrete like that, I can take a look and
tell you exactly what's happening.


>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
>
>



-- 
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta

Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by Gary Tully <ga...@gmail.com>.
Also look at multi-kahaDB (mKahaDB), which allows you to split destinations
across stores so that different usage patterns don't collide:

http://activemq.apache.org/kahadb.html#KahaDB-Multi%28m%29kahaDBpersistenceadapter
http://blog.garytully.com/2011/11/activemq-multiple-kahadb-instances.html
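
By way of illustration, a hedged sketch of what giving the DLQ its own store can look like when the broker is configured in code. The class names come from the 5.x kahadb-store module; the destination name and exact setter signatures are assumptions to verify against your version.

import java.io.File;
import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class MultiKahaDBSketch {
    public static void main(String[] args) throws Exception {
        // A store dedicated to the DLQ, so a parked DLQ message only pins
        // journal files belonging to this store.
        FilteredKahaDBPersistenceAdapter dlqStore = new FilteredKahaDBPersistenceAdapter();
        dlqStore.setQueue("ActiveMQ.DLQ");
        dlqStore.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        // A catch-all store for every other destination.
        FilteredKahaDBPersistenceAdapter defaultStore = new FilteredKahaDBPersistenceAdapter();
        defaultStore.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        MultiKahaDBPersistenceAdapter mKahaDB = new MultiKahaDBPersistenceAdapter();
        mKahaDB.setDirectory(new File("data/mkahadb"));
        mKahaDB.setFilteredPersistenceAdapters(Arrays.asList(dlqStore, defaultStore));

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(mKahaDB);
        broker.start();
    }
}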

On 22 November 2013 19:25, Paul Gale <pa...@gmail.com> wrote:
> The checkpoint worker (which is responsible for determining which data
> log files should be removed) runs every 30 seconds.
>
> One can learn a lot about how the data log purge is happening by
> enabling its logger. See this page for the logger configuration to add
> to your log4j.properties file:
> http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
>
> By examining the log output of the checkpoint worker one can determine
> which topics/queues contain unacknowledged messages which are in turn
> causing data logs (identified by their IDs) to be retained. More often
> than not the cause of unexpected data log retention is due to an
> offline durable subscriber and/or entries in the DLQ.
>
> If the checkpoint worker log output indicates that a lot of one's
> topics/queues (or perhaps all) have unacknowledged messages then
> perhaps the consumers are not acknowledging messages the way you
> thought they were.
>
> Thanks,
> Paul
>
> On Fri, Nov 22, 2013 at 11:05 AM,  <ba...@wellsfargo.com> wrote:
>> So once the message is removed from the DLQ, then the journal would clear in 30 seconds?  Is that post v5.8?
>>
>> Regards,
>>
>> Barry Barnett
>> WMQ Enterprise Services & Solutions
>> Wells Fargo
>> Cell: 704-564-5501
>>
>>
>> -----Original Message-----
>> From: Christian Posta [mailto:christian.posta@gmail.com]
>> Sent: Friday, November 22, 2013 10:23 AM
>> To: users@activemq.apache.org
>> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded
>>
>> Right. It means that particular file that holds the message cannot be deleted/archived.
>>
>>
>> On Fri, Nov 22, 2013 at 6:48 AM,  <ba...@wellsfargo.com> wrote:
>>> If there is a 'useful' artifact in the journal which is 'tied' to a message on the DLQ, that means the journal can't be cleared, right?  The only way to clear the journal is to delete the message from the DLQ first, correct?
>>>
>>> Regards,
>>>
>>> Barry Barnett
>>> WMQ Enterprise Services & Solutions
>>> Wells Fargo
>>> Cell: 704-564-5501
>>>
>>> -----Original Message-----
>>> From: Christian Posta [mailto:christian.posta@gmail.com]
>>> Sent: Thursday, November 21, 2013 6:13 PM
>>> To: users@activemq.apache.org
>>> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory
>>> limits exceeded
>>>
>>> Inline...
>>>
>>> On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
>>>> Version: ActiveMQ v5.8
>>>> Embedded Broker, Producer, Consumer all within same JVM
>>>>
>>>> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.
>>>
>>> So you might need to post your config (or show the code for your config if embedded). "Memory Limits" set to 320MB isn't the same thing as "Store Limits" set to 320MB with 32MB journal files. Individual files will be cleared out if there are no useful artifacts in them (messages, durable subscription info, producer audit data structures, etc...). The default cleanup period is 30s:
>>>
>>> eg:
>>>
>>> <kahaDB cleanupInterval="30000" ..>
>>>
>>>
>>>
>>>>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?
>>>
>>> Are producer/consumer using same connection? What ack mode is your consumer using?
>>>
>>> Since this is embedded (broker,producer,consumer) it should be easy enough to extract out the salient points and put together a unit test.
>>> If you provide something concrete like that, I can take a look and tell you exactly what's happening.
>>>
>>>
>>>>
>>>> Regards,
>>>>
>>>> Barry Barnett
>>>> WMQ Enterprise Services & Solutions
>>>> Wells Fargo
>>>> Cell: 704-564-5501
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Christian Posta
>>> http://www.christianposta.com/blog
>>> twitter: @christianposta
>>
>>
>>
>> --
>> Christian Posta
>> http://www.christianposta.com/blog
>> twitter: @christianposta



-- 
http://redhat.com
http://blog.garytully.com

Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by Paul Gale <pa...@gmail.com>.
The checkpoint worker (which is responsible for determining which data
log files should be removed) runs every 30 seconds.

One can learn a lot about how the data log purge is happening by
enabling its logger. See this page for the logger configuration to add
to your log4j.properties file:
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
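
The configuration that page suggests is roughly the following; the appender details are adjustable, and the important line is the TRACE logger on MessageDatabase:

# Hedged example: route KahaDB checkpoint/cleanup tracing to its own file
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb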

By examining the log output of the checkpoint worker one can determine
which topics/queues contain unacknowledged messages which are in turn
causing data logs (identified by their IDs) to be retained. More often
than not the cause of unexpected data log retention is due to an
offline durable subscriber and/or entries in the DLQ.

If the checkpoint worker log output indicates that a lot of one's
topics/queues (or perhaps all) have unacknowledged messages then
perhaps the consumers are not acknowledging messages the way you
thought they were.

Thanks,
Paul

On Fri, Nov 22, 2013 at 11:05 AM,  <ba...@wellsfargo.com> wrote:
> So once the message is removed from the DLQ, then the journal would clear in 30 seconds?  Is that post v5.8?
>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
>
> -----Original Message-----
> From: Christian Posta [mailto:christian.posta@gmail.com]
> Sent: Friday, November 22, 2013 10:23 AM
> To: users@activemq.apache.org
> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded
>
> Right. It means that particular file that holds the message cannot be deleted/archived.
>
>
> On Fri, Nov 22, 2013 at 6:48 AM,  <ba...@wellsfargo.com> wrote:
>> If there is a 'useful' artifact in the journal which is 'tied' to a message on the DLQ, that means the journal can't be cleared, right?  The only way to clear the journal is to delete the message from the DLQ first, correct?
>>
>> Regards,
>>
>> Barry Barnett
>> WMQ Enterprise Services & Solutions
>> Wells Fargo
>> Cell: 704-564-5501
>>
>> -----Original Message-----
>> From: Christian Posta [mailto:christian.posta@gmail.com]
>> Sent: Thursday, November 21, 2013 6:13 PM
>> To: users@activemq.apache.org
>> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory
>> limits exceeded
>>
>> Inline...
>>
>> On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
>>> Version: ActiveMQ v5.8
>>> Embedded Broker, Producer, Consumer all within same JVM
>>>
>>> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.
>>
>> So you might need to post your config (or show the code for your config if embedded). "Memory Limits" set to 320MB isn't the same thing as "Store Limits" set to 320MB with 32MB journal files. Individual files will be cleared out if there are no useful artifacts in them (messages, durable subscription info, producer audit data structures, etc...). The default cleanup period is 30s:
>>
>> eg:
>>
>> <kahaDB cleanupInterval="30000" ..>
>>
>>
>>
>>>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?
>>
>> Are producer/consumer using same connection? What ack mode is your consumer using?
>>
>> Since this is embedded (broker,producer,consumer) it should be easy enough to extract out the salient points and put together a unit test.
>> If you provide something concrete like that, I can take a look and tell you exactly what's happening.
>>
>>
>>>
>>> Regards,
>>>
>>> Barry Barnett
>>> WMQ Enterprise Services & Solutions
>>> Wells Fargo
>>> Cell: 704-564-5501
>>>
>>>
>>>
>>
>>
>>
>> --
>> Christian Posta
>> http://www.christianposta.com/blog
>> twitter: @christianposta
>
>
>
> --
> Christian Posta
> http://www.christianposta.com/blog
> twitter: @christianposta

RE: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by ba...@wellsfargo.com.
So once the message is removed from the DLQ, then the journal would clear in 30 seconds?  Is that post v5.8?

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501


-----Original Message-----
From: Christian Posta [mailto:christian.posta@gmail.com] 
Sent: Friday, November 22, 2013 10:23 AM
To: users@activemq.apache.org
Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Right. It means that particular file that holds the message cannot be deleted/archived.


On Fri, Nov 22, 2013 at 6:48 AM,  <ba...@wellsfargo.com> wrote:
> If there is a 'useful' artifact in the journal which is 'tied' to a message on the DLQ, that means the journal can't be cleared, right?  The only way to clear the journal is to delete the message from the DLQ first, correct?
>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
> -----Original Message-----
> From: Christian Posta [mailto:christian.posta@gmail.com]
> Sent: Thursday, November 21, 2013 6:13 PM
> To: users@activemq.apache.org
> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory 
> limits exceeded
>
> Inline...
>
> On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
>> Version: ActiveMQ v5.8
>> Embedded Broker, Producer, Consumer all within same JVM
>>
>> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.
>
> So you might need to post your config (or show the code for your config if embedded). "Memory Limits" set to 320MB isn't the same thing as "Store Limits" set to 320MB with 32MB journal files. Individual files will be cleared out if there are no useful artifacts in them (messages, durable subscription info, producer audit data structures, etc...). The default cleanup period is 30s:
>
> eg:
>
> <kahaDB cleanupInterval="30000" ..>
>
>
>
>>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?
>
> Are producer/consumer using same connection? What ack mode is your consumer using?
>
> Since this is embedded (broker,producer,consumer) it should be easy enough to extract out the salient points and put together a unit test.
> If you provide something concrete like that, I can take a look and tell you exactly what's happening.
>
>
>>
>> Regards,
>>
>> Barry Barnett
>> WMQ Enterprise Services & Solutions
>> Wells Fargo
>> Cell: 704-564-5501
>>
>>
>>
>
>
>
> --
> Christian Posta
> http://www.christianposta.com/blog
> twitter: @christianposta



--
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta

Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by Christian Posta <ch...@gmail.com>.
Right. It means that particular file that holds the message cannot be
deleted/archived.
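
One way to do that deletion without recycling the JVM is an ordinary JMS consumer draining the DLQ over the vm:// transport. A hedged sketch; the broker name and the default "ActiveMQ.DLQ" queue name are assumptions about the setup:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DrainDlqSketch {
    public static void main(String[] args) throws Exception {
        // Attach to the broker already running in this JVM; create=false
        // prevents the vm transport from spinning up a new one.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("vm://localhost?create=false");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("ActiveMQ.DLQ"));

        // Each received and auto-acked message is removed from the store,
        // freeing its journal file for the next cleanup pass.
        Message message;
        while ((message = consumer.receive(1000)) != null) {
            // intentionally discard
        }
        connection.close();
    }
}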


On Fri, Nov 22, 2013 at 6:48 AM,  <ba...@wellsfargo.com> wrote:
> If there is a 'useful' artifact in the journal which is 'tied' to a message on the DLQ, that means the journal can't be cleared, right?  The only way to clear the journal is to delete the message from the DLQ first, correct?
>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
> -----Original Message-----
> From: Christian Posta [mailto:christian.posta@gmail.com]
> Sent: Thursday, November 21, 2013 6:13 PM
> To: users@activemq.apache.org
> Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded
>
> Inline...
>
> On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
>> Version: ActiveMQ v5.8
>> Embedded Broker, Producer, Consumer all within same JVM
>>
>> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.
>
> So you might need to post your config (or show the code for your config if embedded). "Memory Limits" set to 320MB isn't the same thing as "Store Limits" set to 320MB with 32MB journal files. Individual files will be cleared out if there are no useful artifacts in them (messages, durable subscription info, producer audit data structures, etc...). The default cleanup period is 30s:
>
> eg:
>
> <kahaDB cleanupInterval="30000" ..>
>
>
>
>>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?
>
> Are producer/consumer using same connection? What ack mode is your consumer using?
>
> Since this is embedded (broker,producer,consumer) it should be easy enough to extract out the salient points and put together a unit test.
> If you provide something concrete like that, I can take a look and tell you exactly what's happening.
>
>
>>
>> Regards,
>>
>> Barry Barnett
>> WMQ Enterprise Services & Solutions
>> Wells Fargo
>> Cell: 704-564-5501
>>
>>
>>
>
>
>
> --
> Christian Posta
> http://www.christianposta.com/blog
> twitter: @christianposta



-- 
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta

RE: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by ba...@wellsfargo.com.
If there is a 'useful' artifact in the journal which is 'tied' to a message on the DLQ, that means the journal can't be cleared, right?  The only way to clear the journal is to delete the message from the DLQ first, correct?

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501

-----Original Message-----
From: Christian Posta [mailto:christian.posta@gmail.com] 
Sent: Thursday, November 21, 2013 6:13 PM
To: users@activemq.apache.org
Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Inline...

On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
> Version: ActiveMQ v5.8
> Embedded Broker, Producer, Consumer all within same JVM
>
> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.

So you might need to post your config (or show the code for your config if embedded). "Memory Limits" set to 320MB isn't the same thing as "Store Limits" set to 320MB with 32MB journal files. Individual files will be cleared out if there are no useful artifacts in them (messages, durable subscription info, producer audit data structures, etc...). The default cleanup period is 30s:

eg:

<kahaDB cleanupInterval="30000" ..>



>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?

Are producer/consumer using same connection? What ack mode is your consumer using?

Since this is embedded (broker,producer,consumer) it should be easy enough to extract out the salient points and put together a unit test.
If you provide something concrete like that, I can take a look and tell you exactly what's happening.


>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
>
>



--
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta

Re: Producer Flow Block - Consumer Deadlock after max memory limits exceeded

Posted by Christian Posta <ch...@gmail.com>.
Inline...

On Thu, Nov 21, 2013 at 10:51 AM,  <ba...@wellsfargo.com> wrote:
> Version: ActiveMQ v5.8
> Embedded Broker, Producer, Consumer all within same JVM
>
> If max memory limits are set to 320MB, which equates to 10 journal files (32MB per file), the files cannot be cleared even if there is 1 message on the DLQ.

So you might need to post your config (or show the code for your
config if embedded). "Memory Limits" set to 320MB isn't the same thing
as "Store Limits" set to 320MB with 32MB journal files. Individual
files will be cleared out if there are no useful artifacts in them
(messages, durable subscription info, producer audit data structures,
etc...). The default cleanup period is 30s:

eg:

<kahaDB cleanupInterval="30000" ..>
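
For a broker configured programmatically rather than via XML, the same knob appears to be exposed on the adapter; a hedged fragment:

import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
kahaDB.setCleanupInterval(30000); // ms between cleanup/checkpoint passes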



>This 1 message blocks the freeing up of the journal file where it resides.  At present the only way we can resolve this is to recycle the JVM, and I'm sure there is a better way.  Any advice?

Are producer/consumer using same connection? What ack mode is your
consumer using?

Since this is embedded (broker,producer,consumer) it should be easy
enough to extract out the salient points and put together a unit test.
If you provide something concrete like that, I can take a look and
tell you exactly what's happening.
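
For what it's worth, a hedged skeleton of that kind of test (JUnit 4; the broker name, queue name, and limits are placeholders, not anything from this thread):

import static org.junit.Assert.assertNotNull;

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ProducerFlowBlockTest {
    private BrokerService broker;

    @Before
    public void startBroker() throws Exception {
        broker = new BrokerService();
        broker.setBrokerName("test");
        broker.setPersistent(true);
        broker.setUseJmx(false);
        // A deliberately small store limit makes it easier to provoke
        // producer-flow blocking in a test.
        broker.getSystemUsage().getStoreUsage().setLimit(5 * 1024 * 1024);
        broker.start();
    }

    @After
    public void stopBroker() throws Exception {
        broker.stop();
    }

    @Test
    public void producerAndConsumerShareJvm() throws Exception {
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("vm://test?create=false");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.Q"));
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.Q"));

        producer.send(session.createTextMessage("payload"));
        assertNotNull(consumer.receive(5000));
        connection.close();
    }
}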


>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
>
>



-- 
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta