Posted to users@activemq.apache.org by Christian Schneider <ch...@die-schneider.net> on 2016/06/17 10:16:15 UTC

Some positive feedback about mkahadb

Just wanted to share some positive feedback we got from a customer.

I wrote some time ago about a problem we had at a customer where ActiveMQ 
failover took too long.
In the end the problem was the sheer amount of data in the KahaDB journals.

We found that most of the long-term queued data was in some DLQs. In a 
single KahaDB the DLQ contents were sparsely scattered across the 
journals.
So most journal files contained only a few KB of still-active messages 
but still consumed their whole space.

This led to a KahaDB size of about 34 GB. We then decided to switch to 
mKahaDB with one KahaDB per queue. With the DLQs isolated, the 
messages were packed much more densely.
After migrating production we got feedback from the customer that 
the KahaDB size went down to just about 50 MB.  This of course also 
eliminated the long failover times.

So I can very much recommend using mKahaDB for such scenarios.
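
For reference, a broker configuration for this kind of setup looks roughly like the following. This is a sketch based on the ActiveMQ mKahaDB documentation, not the exact config from this deployment; the directory and journal size are placeholders:

```xml
<!-- In activemq.xml: one KahaDB instance per destination, so sparsely
     used journals (e.g. DLQs) stay isolated from busy queues. -->
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- perDestination="true" creates a separate KahaDB
           instance for every matching destination -->
      <filteredKahaDB perDestination="true">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```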

Christian

-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Some positive feedback about mkahadb

Posted by Tim Bain <tb...@alumni.duke.edu>.
Christian, that's great news, thanks for sharing.

Since this feels like a post that people may refer back to, there are a
couple of things I think are worth adding.

If DLQ'ed messages are the only long-lived ones, I would expect a similar
result from using mKahaDB with one KahaDB for normal destinations and one
for DLQs; configuring one KahaDB per destination should not be strictly
necessary.
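
A sketch of that variant (assuming DLQ names share a common prefix such as
ActiveMQ.DLQ., as with an individualDeadLetterStrategy; filters are matched
in order, so the catch-all entry comes last):

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- all DLQs share one KahaDB instance -->
      <filteredKahaDB queue="ActiveMQ.DLQ.>">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all: every other destination goes to a second instance -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```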

If you use durable topic subscriptions, be sure to run 5.13.3 or later,
because that version provides the ability to resend the subscription
message so it doesn't keep KahaDB files from being deleted.

Tim
On Jun 17, 2016 9:41 AM, "Timothy Bish" <ta...@gmail.com> wrote:

> On 06/17/2016 06:16 AM, Christian Schneider wrote:
>
>> Just wanted to share some positive feedback we got from a customer.
>>
>> I wrote some time ago about a problem we had at a customer where ActiveMQ
>> failover took too long.
>> In the end the problem was the sheer amount of data in the KahaDB
>> journals.
>>
>> We found that most of the long-term queued data was in some DLQs. In a
>> single KahaDB the DLQ contents were sparsely scattered across the
>> journals.
>> So most journal files contained only a few KB of still-active messages
>> but still consumed their whole space.
>>
>> This led to a KahaDB size of about 34 GB. We then decided to switch to
>> mKahaDB with one KahaDB per queue. With the DLQs isolated, the messages
>> were packed much more densely.
>> After migrating production we got feedback from the customer that the
>> KahaDB size went down to just about 50 MB. This of course also
>> eliminated the long failover times.
>>
>> So I can very much recommend using mKahaDB for such scenarios.
>>
>> Christian
>>
> Great, thanks for the feedback
>
> --
> Tim Bish
> twitter: @tabish121
> blog: http://timbish.blogspot.com/
>
>

Re: Some positive feedback about mkahadb

Posted by Timothy Bish <ta...@gmail.com>.
On 06/17/2016 06:16 AM, Christian Schneider wrote:
> Just wanted to share some positive feedback we got from a customer.
>
> I wrote some time ago about a problem we had at a customer where 
> ActiveMQ failover took too long.
> In the end the problem was the sheer amount of data in the KahaDB 
> journals.
>
> We found that most of the long-term queued data was in some DLQs. In a 
> single KahaDB the DLQ contents were sparsely scattered across the 
> journals.
> So most journal files contained only a few KB of still-active messages 
> but still consumed their whole space.
>
> This led to a KahaDB size of about 34 GB. We then decided to switch to 
> mKahaDB with one KahaDB per queue. With the DLQs isolated, the 
> messages were packed much more densely.
> After migrating production we got feedback from the customer that 
> the KahaDB size went down to just about 50 MB. This of course also 
> eliminated the long failover times.
>
> So I can very much recommend using mKahaDB for such scenarios.
>
> Christian
>
Great, thanks for the feedback

-- 
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/