Posted to users@activemq.apache.org by Bryan <br...@gmail.com> on 2011/11/30 20:32:07 UTC

Throttling deadlock

I'm encountering a deadlock related to throttling (producer flow control).
Here is the scenario which causes this. I receive a message from a queue,
process this message, and send a message to a different queue inside the
message handler. When throttling kicks in, this results in a deadlock. The
new send blocks once throttling has kicked in, waiting for a message to be
dequeued. But the message I originally received does not seem to be marked as
dequeued by the throttler until after I exit the message handler (which is
blocked). Hence I'm deadlocked.
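To make that concrete, the pattern is roughly the following (just a sketch:
the broker URL, queue names and message contents are placeholders, and the
real processing happens where the comment is):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumeThenSend {
    public static void main(String[] args) throws JMSException {
        // broker URL and queue names are placeholders
        Connection connection =
            new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        final MessageProducer producer =
            session.createProducer(session.createQueue("queue.out"));

        session.createConsumer(session.createQueue("queue.in"))
               .setMessageListener(new MessageListener() {
            public void onMessage(Message received) {
                try {
                    // ... process the received message ...
                    producer.send(session.createTextMessage("result"));
                    // ^ this send blocks once flow control kicks in, but the received
                    //   message is not counted as dequeued until onMessage returns,
                    //   so neither side can make progress
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });
        connection.start();
    }
}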

I can avoid the deadlock by using a separate thread to send the new message.
I can also avoid the deadlock by using async sends. Both have downsides,
however. Using a separate thread will require some logic wrapping every
message send. Using async sends bypasses throttling altogether (not what I
want), and has other consequences.
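For illustration, the separate-thread approach amounts to something like this
(a sketch only; the single-thread executor and the extra session are just one
way to do it, since JMS sessions are not thread-safe):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Hand the send off to another thread so the thread inside the message
// handler is not the one that blocks on producer flow control.
public class BackgroundSender {
    private final ExecutorService sendThread = Executors.newSingleThreadExecutor();
    private final Session sendSession;
    private final MessageProducer producer;

    public BackgroundSender(Connection connection, String queueName) throws JMSException {
        // the sending thread gets its own session and producer
        this.sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producer = sendSession.createProducer(sendSession.createQueue(queueName));
    }

    // Called from inside the message handler; returns immediately.
    public void send(final String text) {
        sendThread.submit(new Runnable() {
            public void run() {
                try {
                    producer.send(sendSession.createTextMessage(text));
                } catch (JMSException e) {
                    e.printStackTrace(); // real code would handle this properly
                }
            }
        });
    }
}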

Is using async sends with connectionFactory.setProducerWindowSize() the
correct approach to prevent throttling from deadlocking? I don't really want
to use async sends other than to avoid this problem. Or is there a way to
indicate to the throttler to unblock immediately after a receive() call?
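To be explicit, by async sends with a producer window I mean something like
the following (setUseAsyncSend and setProducerWindowSize are the
ActiveMQConnectionFactory setters; the URL and window size are just example
values):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncSendFactory {
    public static Connection create() throws JMSException {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        factory.setUseAsyncSend(true);              // don't wait for the broker on each send
        factory.setProducerWindowSize(1024 * 1024); // ...but cap un-acknowledged data per
                                                    //    producer (1mb here, just an example)
        return factory.createConnection();
    }
}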



Re: Throttling deadlock

Posted by Gary Tully <ga...@gmail.com>.
thanks for closing the loop on this one.

Yep, the destination limits have a parent/child relationship with system usage;
it's sort of like pocket money: all of the money comes from the parent's
income, so the parent's income is reduced!

If hard limits are in place and they are expected to be met, then share the
system usage among destinations using memory limits that are each a portion
of the total. This works for a static number of destinations.
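For example, something like this (a sketch using the broker's Java API,
equivalent to memoryLimit entries in activemq.xml; the 64mb total and the
split across 8 destinations are only illustrative):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

// Sketch: share a 64mb system memory limit among 8 known destinations by
// giving each destination 1/8th of the total as its memoryLimit.
public class PartitionedLimits {
    public static BrokerService configure() throws Exception {
        BrokerService broker = new BrokerService();

        long systemLimit = 64L * 1024 * 1024;            // system memoryUsage
        broker.getSystemUsage().getMemoryUsage().setLimit(systemLimit);

        PolicyEntry perDestination = new PolicyEntry();
        perDestination.setProducerFlowControl(true);
        perDestination.setMemoryLimit(systemLimit / 8);  // a portion of the total

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(perDestination);       // applies to all destinations
        broker.setDestinationPolicy(policyMap);
        return broker;
    }
}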

When the number of destinations is dynamic, you can leverage the
cursorMemoryHighWaterMark percentage via a destination policy. Reduce
it so that the available limit is shared across destinations on an
as-needed basis. A value of 5% would ensure that 20 destinations can
each viably use some memory for caching messages without blocking.
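For example (a sketch; as far as I know PolicyEntry exposes the same setting
programmatically as setCursorMemoryHighWaterMark, mirroring
cursorMemoryHighWaterMark="5" in the XML policy):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

// Sketch: with a dynamic number of destinations, lower the cursor high water
// mark so each destination only claims system memory on an as-needed basis.
public class DynamicDestinations {
    public static void apply(BrokerService broker) {
        PolicyEntry entry = new PolicyEntry();
        entry.setCursorMemoryHighWaterMark(5);  // percent of the usable limit; default is 70

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);
        broker.setDestinationPolicy(policyMap);
    }
}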

In the single destination case, the default value of 70% with the
default store cursor will ensure caching stops before the system limit
is reached, even if the destination limit == the system limit, so
there is no need to block a send.

When the vmQueueCursor is used, reaching the limit means sends will block,
because all messages are always kept in memory.

Final note: with limits in place, the JVM's max heap (-Xmx) should exceed the
system usage value, possibly by a factor of 2 depending on the usage
pattern and GC spikes. In ActiveMQ, messages in memory are the only
resource accounted for when checking usage limits, so all other objects,
destinations, JMX, store caches etc. need JVM resources.
Using all of the available heap for messages will quickly lead to OOM,
so don't do that.
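As a rough illustration only (the 50% split below is just one way to keep
that headroom, not a prescribed rule):

import org.apache.activemq.broker.BrokerService;

// Sketch: tie the broker's memoryUsage limit to the JVM heap so messages can
// take at most about half of -Xmx, leaving the rest for destinations, JMX,
// store caches and GC headroom. The 50% figure is only an illustration.
public class HeapHeadroom {
    public static void apply(BrokerService broker) throws Exception {
        long maxHeap = Runtime.getRuntime().maxMemory();  // roughly -Xmx
        broker.getSystemUsage().getMemoryUsage().setLimit(maxHeap / 2);
    }
}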


On 1 December 2011 22:04, Bryan <br...@gmail.com> wrote:
> For those interested, I resolved my issue. ActiveMQ flow control kicks in
> when the queue memory limit is reached, or more importantly, when system
> memory usage is reached. By default, both the per-queue and system memory
> limits are set to 64mb. If you have more than one queue in use, then you
> will generally hit the system memory limit before the queue limit if there
> are slow consumers and you are using a VM pending queue policy. All queues
> will then end up being throttled based on the shared system memory, and this
> can result in a deadlock. I consider the deadlock on the shared memory limit
> to be a bug in ActiveMQ.
>
> Thus to avoid a deadlock, set the system memory usage limit to be high
> enough that it will never be reached before the per-queue limits, e.g. set
> it to (per-queue limit) × (number of queues). Once I did this, there were no
> more deadlocks as a result of producer flow control. Theoretically a
> queue-specific deadlock could still happen if you are consuming then
> re-queuing messages to the same queue, but that isn't an issue for me.
>
> The following sets a 1mb limit per queue and a 200mb limit on system usage, so
> you can have several queues full before hitting the system usage limit.
>
>  <destinationPolicy>
>        <policyMap>
>          <policyEntries>
>                <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
>                  <pendingQueuePolicy>
>                        <vmQueueCursor/>
>                  </pendingQueuePolicy>
>                </policyEntry>
>          </policyEntries>
>        </policyMap>
> </destinationPolicy>
>
> <systemUsage>
>        <systemUsage>
>                <memoryUsage>
>                        <memoryUsage limit="200 mb"/>
>                </memoryUsage>
>                <storeUsage>
>                        <storeUsage limit="1 gb"/>
>                </storeUsage>
>                <tempUsage>
>                        <tempUsage limit="100 mb"/>
>                </tempUsage>
>        </systemUsage>
> </systemUsage>



-- 
http://fusesource.com
http://blog.garytully.com

Re: Throttling deadlock

Posted by Bryan <br...@gmail.com>.
For those interested, I resolved my issue. ActiveMQ flow control kicks in
when the queue memory limit is reached, or more importantly, when system
memory usage is reached. By default, both the per-queue and system memory
limits are set to 64mb. If you have more than one queue in use, then you
will generally hit the system memory limit before the queue limit if there
are slow consumers and you are using a VM pending queue policy. All queues
will then end up being throttled based on the shared system memory, and this
can result in a deadlock. I consider the deadlock on the shared memory limit
to be a bug in ActiveMQ.

Thus to avoid a deadlock, set the system memory usage limit to be high
enough that it will never be reached before the per-queue limits, e.g. set
it to (per-queue limit) × (number of queues). Once I did this, there were no
more deadlocks as a result of producer flow control. Theoretically a
queue-specific deadlock could still happen if you are consuming then
re-queuing messages to the same queue, but that isn't an issue for me.

The following sets a 1mb limit per queue and a 200mb limit on system usage, so
you can have several queues full before hitting the system usage limit.

 <destinationPolicy>
	<policyMap>
	  <policyEntries>
		<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
		  <pendingQueuePolicy>
			<vmQueueCursor/>
		  </pendingQueuePolicy>
		</policyEntry>
	  </policyEntries>
	</policyMap>
</destinationPolicy> 
        
<systemUsage>
	<systemUsage>
		<memoryUsage>
			<memoryUsage limit="200 mb"/>
		</memoryUsage>
		<storeUsage>
			<storeUsage limit="1 gb"/>
		</storeUsage>
		<tempUsage>
			<tempUsage limit="100 mb"/>
		</tempUsage>
	</systemUsage>
</systemUsage>


