Posted to users@activemq.apache.org by James Green <ja...@gmail.com> on 2012/11/08 11:01:40 UTC

Memory usage with large messages

Hi,

We are narrowing down the chaos we've encountered in recent weeks with
brokers hanging. The trigger point appears to be a large (10M) message
produced by a STOMP client on a hub topic and then read by three spoke STOMP
clients. The STOMP transports on the spokes appear to start dying after
this message is produced.

Between the hub and spokes is a slow ADSL upload link. The hub talks to the
spokes using the ssl:// scheme.
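
For reference, the hub-to-spoke links are defined with networkConnector
elements in the hub's activemq.xml. A minimal sketch of that shape
(hostnames, ports and connector names here are assumptions, not our
actual config):

```xml
<!-- Hub broker: one store-and-forward network connector per spoke,
     over SSL. Hostname/port are hypothetical placeholders. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="hub">
  <networkConnectors>
    <networkConnector name="to-spoke1"
                      uri="static:(ssl://spoke1.example.com:61617)"/>
  </networkConnectors>
</broker>
```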

Do the network connectors between hub and spoke read messages sequentially
or in parallel? Also, is the memory for the entire message allocated at the
beginning, or does it expand as bytes are read off the wire?

I also wonder whether the content of the messages is shared across
"interested" components or whether each maintains its own duplicate, i.e.
the OpenWire transport connector + KahaDB, plus the topic cursor + STOMP
transport?

Trying to understand the behaviour here!

Thanks,

James

Re: Memory usage with large messages

Posted by Gary Tully <ga...@gmail.com>.
> What about the spokes (the receivers)? Similar memory usage?

Yes. Remember that networks are store and forward, so each spoke gets a
message send and dispatches it to its connected consumers in the same way
as the hub responds to a send.





-- 
http://redhat.com
http://blog.garytully.com

Re: Memory usage with large messages

Posted by James Green <ja...@gmail.com>.
What about the spokes (the receivers)? Similar memory usage?

As to the STOMP clients, they report "unable to connect" to localhost.




Re: Memory usage with large messages

Posted by Gary Tully <ga...@gmail.com>.
There will be a single copy of the message, but by default it will get
marshalled in parallel due to asyncDispatch. So there may be multiple
marshalling buffers in existence for the message sent via the proxy
consumers in the network connectors.

asyncDispatch is good because it means that a slow consumer will not
block dispatch to other consumers, but it will contribute to the memory
usage.
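
If that memory cost becomes a problem, async dispatch can be turned off
per transport connector. A sketch (the attribute name is from my
recollection of the TransportConnector options; verify it against the
docs for your version):

```xml
<!-- Trades dispatch parallelism for lower memory use: with async
     dispatch disabled, a slow consumer can again block dispatch to
     the other consumers on this connector. -->
<transportConnector name="openwire"
                    uri="tcp://0.0.0.0:61616"
                    disableAsyncDispatch="true"/>
```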

If the memory limits allow, on the send, the message will be retained
in memory and dispatched from memory.
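
Those limits live under systemUsage in activemq.xml. A minimal sketch
with example values only:

```xml
<!-- If a 10M message (times several marshalling buffers) approaches
     the memoryUsage limit, the broker starts applying producer flow
     control instead of dispatching from memory. Values are examples. -->
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="256 mb"/>
    </memoryUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```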

It would be great to get a handle on why the STOMP clients on the spokes
die. Is it an inactivity timeout?
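
If it is the inactivity monitor, the timeout can be raised with
wireFormat.maxInactivityDuration on the connector URIs; a 10M message
crawling over a slow ADSL uplink could plausibly exceed the 30s default.
A sketch with assumed hostnames and values (option names from memory;
check them against the ActiveMQ transport reference):

```xml
<!-- Hub side: give the SSL network bridge more slack before the
     inactivity monitor drops the connection (value in ms). -->
<networkConnector name="to-spoke1"
    uri="static:(ssl://spoke1.example.com:61617?wireFormat.maxInactivityDuration=120000)"/>

<!-- Spoke side: the same option on the STOMP transport connector. -->
<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?wireFormat.maxInactivityDuration=120000"/>
```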




-- 
http://redhat.com
http://blog.garytully.com