Posted to users@activemq.apache.org by Mark Anderson <ma...@gmail.com> on 2012/11/23 16:36:20 UTC

Memory and Temp Usage Questions

I have ActiveMQ 5.6.0 configured as follows:

Producer Flow Control = false
Send Fail If No Space = true
Memory Usage Limit = 128Mb
Temp Usage Limit = 1Gb

All my messages are non-persistent. The temp usage is configured to handle
spikes/slow consumers when processing messages.

I continually see the following in the logs:

WARN  Nov 20 20:55:47 (13748874 [InactivityMonitor Async Task: java.util.concurrent.ThreadPoolExecutor$Worker@7ea0e15b[State = 0, empty queue]] org.apache.activemq.broker.TransportConnection.Transport) Transport Connection to: tcp://192.168.2.103:35186 failed: java.net.SocketException: Broken pipe
INFO  Nov 20 20:55:51 (13752162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.
INFO  Nov 20 20:55:56 (13757162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.
INFO  Nov 20 20:56:01 (13762162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.

I'm not sure why the connection never shuts down.

I then see the following message:

org.apache.activemq.broker.region.TopicSubscription) TopicSubscription: consumer=ID:linux-5ks2-57958-1353426643811-3:1:378:1, destinations=1, dispatched=32766, delivered=0, matched=0, discarded=0: Pending message cursor [org.apache.activemq.broker.region.cursors.FilePendingMessageCursor@4c41cfa2] is full, temp usage (0%) or memory usage (211%) limit reached, blocking message add() pending the release of resources.

This leads me to the following questions:

1) Why would the memory usage be 211% while temp usage is 0%?
2) The thread dump shows that send calls on producers are blocking. Why
would they not throw exceptions when send fail if no space = true?
3) Would the issue with connection shutdown contribute to the memory usage?

Thanks,
Mark

Re: Memory and Temp Usage Questions

Posted by Mark Anderson <ma...@gmail.com>.
The server is configured in code as follows:

    BrokerService broker = new BrokerService();
    broker.setBrokerName(brokerName);
    // enable persistence so that temp storage can be used
    broker.setPersistent(true);
    broker.setDataDirectory(dataDirectory);
    broker.setSchedulerSupport(false);

    broker.addConnector("tcp://0.0.0.0:24726");  // 24726 peer to peer
    broker.addConnector("tcp://0.0.0.0:24727");  // 24727 client to server

    // set the default policy to be used by all queues and topics;
    // disable producer flow control so that producers don't block and
    // temp storage is used if the memory limit is reached
    PolicyMap policyMap = new PolicyMap();
    PolicyEntry defaultPolicy = new PolicyEntry();
    defaultPolicy.setProducerFlowControl(false);
    policyMap.setDefaultEntry(defaultPolicy);
    // limit the live server domain events topic to 20k pending messages
    // to prevent messaging to the historical server locking up
    PolicyEntry liveServerDomainEventsPolicy = new PolicyEntry();
    ConstantPendingMessageLimitStrategy constantPendingMessageLimitStrategy =
        new ConstantPendingMessageLimitStrategy();
    constantPendingMessageLimitStrategy.setLimit(20000);
    liveServerDomainEventsPolicy.setPendingMessageLimitStrategy(constantPendingMessageLimitStrategy);
    policyMap.put(new ActiveMQTopic("liveServerDomainEvents"), liveServerDomainEventsPolicy);
    broker.setDestinationPolicy(policyMap);

    // adjust default memory settings
    SystemUsage systemUsage = broker.getSystemUsage();
    // enable failure exception as a last resort if memory fills,
    // otherwise producers will block
    systemUsage.setSendFailIfNoSpace(true);
    // set the in-memory limit to 128MB and temp storage to 1GB;
    // the store is not used as all messages are non-persistent
    systemUsage.getMemoryUsage().setLimit(128000000);
    systemUsage.getTempUsage().setLimit(1000000000);

    // optionally connect to a peer broker
    // this will be set on spoke nodes
    if (peerAddress != null)
    {
      URI uri = new URI("static:(tcp://" + peerAddress + ":" + jmsPortPeer
          + ")?initialReconnectDelay=5000&useExponentialBackOff=false&jms.prefetchPolicy.topicPrefetch=32766");

      NetworkConnector networkConnector =
          new PeerNetworkConnector(peerAddress, uri, this);
      networkConnector.setName("peerConnector-" + peerAddress);
      networkConnector.setDuplex(true);
      networkConnector.setNetworkTTL(networkTTL);
      networkConnector.setPrefetchSize(32766);
      broker.addNetworkConnector(networkConnector);
    }

    ManagementContext managementContext = new ManagementContext();
    managementContext.setCreateConnector(false);
    broker.setManagementContext(managementContext);
    broker.setUseShutdownHook(false);
    broker.start();

    // broker.start() is asynchronous, so wait;
    // we don't want to accidentally create an embedded broker
    broker.waitUntilStarted();

The error happened on a customer system, so creating a JUnit test case
could be difficult; we have not yet been able to reproduce it in our test
environment.


On 27 November 2012 11:11, Gary Tully <ga...@gmail.com> wrote:

> [quoted text trimmed]

Re: Memory and Temp Usage Questions

Posted by Gary Tully <ga...@gmail.com>.
Can you post your XML configuration to clarify? Even better, if you can
produce a JUnit test case that reproduces it, that would help get to the
bottom of this.


W.r.t. AMQ-3643, it is a long way down the priority list but is
something that is on the radar.

On 27 November 2012 09:22, Mark Anderson <ma...@gmail.com> wrote:

> AMQ-3643




-- 
http://redhat.com
http://blog.garytully.com

Re: Memory and Temp Usage Questions

Posted by Mark Anderson <ma...@gmail.com>.
Thanks for the information.

I have a default policy entry with no memory limit configured. However, JMX
shows that all topics are using the 128Mb system usage limit. So my memory
limits appear to be correct, and I would expect the cursor to spool to disk.
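
For reference, a minimal sketch of how I'm checking those limits over JMX
(assuming a reachable JMX connector; the service URL and broker name below
are placeholders, and MemoryPercentUsage/TempPercentUsage are the standard
broker MBean attributes):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // connect to the broker's JMX endpoint and read the usage percentages
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
    JMXConnector jmx = JMXConnectorFactory.connect(url);
    MBeanServerConnection conn = jmx.getMBeanServerConnection();
    // 5.x (pre-5.8) object name layout
    ObjectName brokerName = new ObjectName(
        "org.apache.activemq:BrokerName=myBroker,Type=Broker");
    System.out.println("memory used: "
        + conn.getAttribute(brokerName, "MemoryPercentUsage") + "%");
    System.out.println("temp used: "
        + conn.getAttribute(brokerName, "TempPercentUsage") + "%");
    jmx.close();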

Given the above, do you think it is likely I am triggering AMQ-3643? Is
there a workaround? Do you have any idea when this is likely to be fixed?


On 26 November 2012 12:21, Gary Tully <ga...@gmail.com> wrote:

> [quoted text trimmed]

Re: Memory and Temp Usage Questions

Posted by Gary Tully <ga...@gmail.com>.
you have a slow or blocked consumer that is blocking the send due to the
pending message cursor being full. The blocked send will stop the
connection from being terminated.

To have the cursor spool to disk (temp store) you need to reduce the system
usage memory limit b/c spooling to disk is based on that shared limit. It
is independent of the destination limit.
However, I think it is the destination limit that is visible in the log,
hence the 211%.

Start by increasing the destination limit to the same value as your system
usage memory limit. Do this via a destination policy for your (or all)
topic(s).
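
For example, against the code posted earlier in the thread (a sketch;
setMemoryLimit is the per-destination limit setter, and 128000000 matches
the system usage memory limit):

    // raise the per-destination topic limit to match the 128MB
    // system usage memory limit
    PolicyEntry defaultPolicy = new PolicyEntry();
    defaultPolicy.setProducerFlowControl(false);
    defaultPolicy.setMemoryLimit(128000000);
    PolicyMap policyMap = new PolicyMap();
    policyMap.setDefaultEntry(defaultPolicy);  // applies to all destinations
    broker.setDestinationPolicy(policyMap);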

There is a known problem with the memory limit checks in the non-persistent
case: they should avoid blocking on the cursor and respect the
sendFailIfNoSpace flag, but that needs some work.
Though the cursor is different in the JIRA, the symptom is related to
https://issues.apache.org/jira/browse/AMQ-3643




On 23 November 2012 15:36, Mark Anderson <ma...@gmail.com> wrote:

> [quoted text trimmed]



-- 
http://redhat.com
http://blog.garytully.com

Re: Memory and Temp Usage Questions

Posted by Christian Posta <ch...@gmail.com>.
So here is my best guess as to what happened:

Your transport connection got hosed somehow and tried to shut down. But for
some reason, it hung while shutting down.

The subscription kept trying to dispatch messages to the connection for what
was (at one point) its consumer. It sends these asynchronously, so it
doesn't wait around to see whether they are actually delivered. On a
successful send the message is removed from memory, but in your case the
sends never completed, so messages continued to pile up in memory.

The messages never got spooled to disk because the number of messages
dispatched was lower than your prefetch limit, so the broker just continued
to send and build up memory. When it finally hit the prefetch limit, it went
into the section of code where it tries to add messages to the dispatch
cursor. There it would most likely sit in a while loop waiting for memory to
become available (and would log the message you saw about the
TopicSubscription). But the FilePendingMessageCursor never had any messages
put into it, so it had nothing to flush to disk. My guess is it would just
sit in that while loop.





On Tue, Nov 27, 2012 at 7:31 AM, Mark Anderson <ma...@gmail.com> wrote:

> [quoted text trimmed]



-- 
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta

Re: Memory and Temp Usage Questions

Posted by Mark Anderson <ma...@gmail.com>.
PeerNetworkConnector extends DiscoveryNetworkConnector so I can fire
listeners for onServiceAdd and onServiceRemove.
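
For context, a trimmed outline of what that looks like (illustrative only;
the PeerListener type and its method names are placeholders for our internal
code, while onServiceAdd/onServiceRemove are the DiscoveryListener callbacks
inherited from DiscoveryNetworkConnector):

    import java.io.IOException;
    import java.net.URI;
    import org.apache.activemq.command.DiscoveryEvent;
    import org.apache.activemq.network.DiscoveryNetworkConnector;

    // illustrative outline, not the actual implementation
    public class PeerNetworkConnector extends DiscoveryNetworkConnector
    {
      private final PeerListener listener;  // hypothetical callback interface

      public PeerNetworkConnector(String peerAddress, URI uri, PeerListener listener)
          throws IOException
      {
        super(uri);
        this.listener = listener;
      }

      @Override
      public void onServiceAdd(DiscoveryEvent event)
      {
        super.onServiceAdd(event);     // let the base class create the bridge
        listener.peerAdded(event.getServiceName());
      }

      @Override
      public void onServiceRemove(DiscoveryEvent event)
      {
        super.onServiceRemove(event);  // base class tears down the bridge
        listener.peerRemoved(event.getServiceName());
      }
    }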


On 27 November 2012 14:16, Christian Posta <ch...@gmail.com> wrote:

> [quoted text trimmed]

Re: Memory and Temp Usage Questions

Posted by Christian Posta <ch...@gmail.com>.
Okay good to know. I suppose this error happened just once randomly and you
cannot reproduce?

BTW... what is PeerNetworkConnector in your config:

    NetworkConnector networkConnector = new PeerNetworkConnector(peerAddress, uri, this);


On Tue, Nov 27, 2012 at 7:08 AM, Mark Anderson <ma...@gmail.com> wrote:

> [quoted text trimmed]



-- 
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta

Re: Memory and Temp Usage Questions

Posted by Mark Anderson <ma...@gmail.com>.
The prefetch size was set on the network connector as we were getting
messages about slow consumers across the network bridge.

As far as I can see, the network bridge had not failed. The connector
entries in the log are for a client subscription that will also have the
topic prefetch set to 32766. I am trying to get logs from the client.

The broker on the other end of the bridge uses the same configuration.


On 27 November 2012 13:41, Christian Posta <ch...@gmail.com> wrote:

> [quoted text trimmed]

Re: Memory and Temp Usage Questions

Posted by Christian Posta <ch...@gmail.com>.
Answers to your questions:

1) Not sure yet
2) Because at the moment, send fail if no space is only triggered when
producer flow control is on (at least for this case, topics); see the
sketch after this list
3) Like gtully said, connections cannot be shut down if they are blocked
somehow
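
A minimal sketch of the combination that does throw on send, assuming the
PolicyEntry/SystemUsage setup from the code posted earlier (with flow
control on and sendFailIfNoSpace set, a producer that hits the limit should
get a javax.jms.ResourceAllocationException instead of blocking):

    // flow control ON + sendFailIfNoSpace = fail fast rather than block
    PolicyEntry defaultPolicy = new PolicyEntry();
    defaultPolicy.setProducerFlowControl(true);
    policyMap.setDefaultEntry(defaultPolicy);
    broker.setDestinationPolicy(policyMap);
    broker.getSystemUsage().setSendFailIfNoSpace(true);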

I noticed in your config you explicitly set the prefetch on the network
connector to 32766. The default for network connectors is 1000 and the
default for regular topics is Short.MAX_VALUE (which is 32767). Since the
bridge doesn't have a prefetch buffer like normal clients do, setting the
prefetch to 32766 could end up flooding it. Any reason why you have it set
to 32766?
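
For example, a sketch of dropping the bridge back toward its default while
leaving client prefetch alone (networkConnector is the one from the
configuration posted earlier):

    // network-connector default prefetch is 1000, much smaller than
    // the 32766 used for client topic consumers
    networkConnector.setPrefetchSize(1000);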

TopicSubscriptions should always be tied to the broker's main memory usage.
If one has picked up the destination's memory limit, then something went
wrong. Like Gary said, the pending message cursor's messages would be
spooled to disk when the main memory limit reaches its high water mark (70%
by default), but that appears not to have happened in this case.
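
If spooling needs to kick in earlier, the high water mark is tunable per
destination; a sketch, assuming setCursorMemoryHighWaterMark is available in
your version (it takes a percentage):

    // start spooling the pending cursor to the temp store at 60%
    // of the usage limit instead of the default 70%
    PolicyEntry defaultPolicy = new PolicyEntry();
    defaultPolicy.setCursorMemoryHighWaterMark(60);
    policyMap.setDefaultEntry(defaultPolicy);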

Are there any indications that the TopicSubscription is for the network
bridge? Or maybe that the network bridge failed somehow? I see that the
dispatched count is the same as what you've set for your prefetch on the
bridge, but if anything else can point to that it might be helpful. For
example, are those port numbers on the transport connector logs for the
network bridge?

How is the broker on the other end of the bridge configured? Same?


On Fri, Nov 23, 2012 at 8:36 AM, Mark Anderson <ma...@gmail.com> wrote:

> [quoted text trimmed]



-- 
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta