Posted to users@qpid.apache.org by Martin Ritchie <ri...@apache.org> on 2010/04/16 17:53:38 UTC

Re: Java M3 Qpid broker memory consumption

On 8 November 2008 05:15, Keith Chow <ke...@xml-asia.org> wrote:
> We applied the server-side queue limits, lowered the server/client prefetch
> high/low marks, and simplified our test case as follows:
>
> 1) One fast producer at 200 msg/s, msg size ~250 bytes, non-transactional,
> non-persistent, no-ack mode, TTL = 5s (sketched below).
> 2) 2 to 3 extremely slow and/or suspended consumers subscribed to the same
> topic.
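
A minimal sketch of the producer half of this test case, assuming a plain JMS client; the connection-factory lookup is a hypothetical helper, and the Qpid-specific no-ack session mode from the original test is replaced with standard AUTO_ACKNOWLEDGE:

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

public class FastProducer {
    public static void main(String[] args) throws Exception {
        // Hypothetical helper: in a real test the factory would come from JNDI
        // configured with the broker's connection URL.
        ConnectionFactory factory = lookupConnectionFactory();
        Connection connection = factory.createConnection();
        connection.start();

        // The original test used Qpid's no-ack mode; AUTO_ACKNOWLEDGE is the
        // closest portable JMS setting.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("market.data"); // placeholder topic name

        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.setTimeToLive(5000); // TTL = 5s

        byte[] payload = new byte[250]; // ~250-byte messages
        while (true) {
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(payload);
            producer.send(msg);
            Thread.sleep(5); // roughly 200 msg/s
        }
    }

    private static ConnectionFactory lookupConnectionFactory() {
        throw new UnsupportedOperationException("look up via JNDI in a real test");
    }
}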
>
> We also modified the broker's message-expiry task to remove any node with
> expired = true, ignoring the durable / acquire condition (to make sure
> they're purged from the message store).
>
> Result:
>
> The broker's old-generation heap would still reach gigabytes in size in less
> than 10 minutes. JConsole showed that no queue had built up significantly
> more messages than the prefetch count.
>
> Profiling showed the gigabytes of byte[] were referenced by the broker's
> pool event Job queue, and the events themselves were referenced by the
> underlying MINA SocketImpl.
>
> The cause is similar to this TCP congestion issue from the Apache MINA users
> list: http://mina.markmail.org/message/6q5t5gwdozypm6dk?q=byte%5B%5D+gc
>
> Is this expected behaviour of the M3 Java broker with a slow client?
>
> As an interim solution, we've modified the broker to detect slow topic
> consumers (by inspecting the expiry timestamp, for our use case) and kill
> them off (with MINA's close-session call). This allowed GC to reclaim the
> dead clients' memory resources.
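
A rough sketch of the kind of check described above; the Subscription interface is a hypothetical stand-in for whatever per-consumer state the broker keeps, and only MINA's IoSession.close() is a real API call:

import org.apache.mina.common.IoSession;

// Hypothetical periodic task: close topic consumers whose oldest queued
// message has already passed its expiry time.
class SlowConsumerReaper implements Runnable {

    // Stand-in for per-subscription state; not an actual broker class.
    interface Subscription {
        long oldestQueuedMessageExpiry(); // absolute expiry in millis, 0 if empty
        IoSession ioSession();            // MINA session backing this client
    }

    private final Iterable<Subscription> subscriptions;

    SlowConsumerReaper(Iterable<Subscription> subscriptions) {
        this.subscriptions = subscriptions;
    }

    @Override
    public void run() {
        long now = System.currentTimeMillis();
        for (Subscription sub : subscriptions) {
            long expiry = sub.oldestQueuedMessageExpiry();
            if (expiry != 0 && expiry < now) {
                // The consumer still holds a message that has already expired:
                // treat it as slow and close its session so the buffered frames
                // become unreachable and can be garbage collected.
                sub.ioSession().close();
            }
        }
    }
}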
>
> Keith

Hi Keith,

I'm going to take a look at this slow-consumer issue in terms of
reducing the data retained by the broker. I was surprised to read that
you were able to cause broker memory issues in the MINA layers due
to a slow consumer. I would have thought that the prefetch limit
would have protected you.
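
For reference, the prefetch window on the Qpid Java client is normally capped via the maxprefetch connection URL option; a minimal sketch, with placeholder host and credentials, and the option name should be checked against the client version in use:

import javax.jms.Connection;
import org.apache.qpid.client.AMQConnection;

public class PrefetchExample {
    public static void main(String[] args) throws Exception {
        // Cap the client-side prefetch so the broker stops delivering once the
        // unacknowledged window is full; values here are placeholders.
        Connection connection = new AMQConnection(
                "amqp://guest:guest@clientid/test"
                + "?brokerlist='tcp://localhost:5672'&maxprefetch='100'");
        connection.start();
    }
}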

I was thinking of doing something similar to what you decided to do:
disconnecting clients when they are detected to be 'slow'.

Are you currently using this change in a production system? Do you
have any feedback that you would like me to incorporate in the design
of this feature? I will be putting a design together for this next
week.

Regards

Martin

> On Thu, Nov 6, 2008 at 5:14 AM, Gordon Sim <gs...@redhat.com> wrote:
>
>> Robert Greig wrote:
>>
>>> 2008/11/4 Gordon Sim <gs...@redhat.com>:
>>>
>>>  Can someone from the C++ side indicate whether the C++ broker does
>>>>> this? If not I shall raise enhancement requests for both brokers.
>>>>>
>>>> The c++ broker allows a hard limit to be set for a queue or a system wide
>>>> default.
>>>>
>>>
>>> What actions can you configure when the limit is hit? It occurs to me
>>> that there are two main cases:
>>>
>>> 1) "regular" queue - in this case you want to limit the publisher
>>>
>>> 2) private temporary queue bound to a topic exchange (or indeed other
>>> exchange types) - in this case you probably want to kill the slow
>>> consumer
>>>
>>> Thoughts?
>>>
>>
>> I agree.
>>
>> At present the c++ broker is much more limited and can either: kill the
>> publisher, flow to disk, or discard the oldest message(s) to make room.
>>
>



-- 
Martin Ritchie

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: Java M3 Qpid broker memory consumption

Posted by Martin Ritchie <ri...@apache.org>.
On 20 April 2010 01:06, Keith Chow <ke...@xml-asia.org> wrote:
> Hi Martin,
>
> The slow-client detection customization has been in production for over a
> year (still using the M3 Java broker). Besides this detection technique, we
> also use the broker's ProtectedIO to guard against the case where a client
> is hung but somehow stays alive.

Are you able to provide a patch for the slow consumer disconnection?

> ProtectedIO allows us to limit the size of each write request queue to a
> pre-defined upper bound, but we had to make a minor change to enable this
> behaviour (QPID-1980, https://issues.apache.org/jira/browse/QPID-1980).

Yes, that was a rather unfortunate issue; thanks for providing a patch for it.

Cheers

Martin

> Regards,
>
> Keith
>



-- 
Martin Ritchie

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: Java M3 Qpid broker memory consumption

Posted by Keith Chow <ke...@xml-asia.org>.
Hi Martin,

The slow-client detection customization has been in production for over a
year (still using the M3 Java broker). Besides this detection technique, we
also use the broker's ProtectedIO to guard against the case where a client is
hung but somehow stays alive.

ProtectedIO allows us to limit the size of each write request queue to a
pre-defined upper bound, but we had to make a minor change to enable this
behaviour (QPID-1980, https://issues.apache.org/jira/browse/QPID-1980).
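
A generic illustration of the idea behind bounding a per-connection write queue (not the broker's actual ProtectedIO code; all names here are hypothetical):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical per-connection writer: outgoing frames queue up to a fixed
// bound; if the client cannot drain its socket fast enough the queue fills
// and the connection is closed instead of letting frames pile up on the heap.
class BoundedWriter {
    private final BlockingQueue<byte[]> pending;
    private final Runnable closeConnection; // e.g. closes the underlying session

    BoundedWriter(int maxPendingWrites, Runnable closeConnection) {
        this.pending = new ArrayBlockingQueue<byte[]>(maxPendingWrites);
        this.closeConnection = closeConnection;
    }

    void enqueue(byte[] frame) {
        // offer() fails immediately once the bound is reached
        if (!pending.offer(frame)) {
            closeConnection.run();
        }
    }

    byte[] nextFrame() throws InterruptedException {
        return pending.take(); // drained by the I/O thread
    }
}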

Regards,

Keith
