Posted to users@activemq.apache.org by DV <vi...@gmail.com> on 2013/01/18 21:46:04 UTC

High swap usage and network timeouts during heavy publishing

Hi all,

I'd like to post some symptoms we're seeing with our ActiveMQ
publishing broker and maybe get some ideas on what we need to tweak.

We're using ActiveMQ to push publishing messages to a set of pre-prod
and prod consumers. At the peak of publishing, ActiveMQ becomes
unresponsive (web UI timeouts, JMX timeouts, the process hangs). On the
system side, we're seeing swap usage approach 50% of the host's physical
memory (4GB out of 8GB). In the logs there are three types of warnings:

Slow KahaDB access:
2013-01-17 23:49:58,616 |  INFO | Slow KahaDB access: Journal append
took: 0 ms, Index update took 1991 ms
2013-01-17 23:50:26,649 |  INFO | Slow KahaDB access: Journal read took: 1818 ms

Timeout of ZooKeeper connection:
2013-01-18 03:24:25,195 |  INFO | Client session timed out, have not
heard from server in 14251ms for sessionid 0x23c46ac51160442, closing
socket connection and attempting reconnect

Timeout of consumer connections:
2013-01-18 04:23:58,674 |  WARN | Transport Connection to:
tcp://10.129.18.33:48055 failed:
org.apache.activemq.transport.InactivityIOException: Channel was
inactive for too (>30000) long: tcp://10.129.18.33:48055
2013-01-18 04:23:58,673 |  WARN | Transport Connection to:
tcp://10.14.26.60:48237 failed:
org.apache.activemq.transport.InactivityIOException: Channel was
inactive for too (>30000) long: tcp://10.14.26.60:48237
2013-01-18 04:23:58,673 |  WARN | Transport Connection to:
tcp://10.14.26.61:56382 failed:
org.apache.activemq.transport.InactivityIOException: Channel was
inactive for too (>30000) long: tcp://10.14.26.61:56382

We're running ActiveMQ 5.7.0 on RHEL 5.7 with Java 1.6.0_34. The host
has 8GB of RAM and has /data and /logs NFS-mounted. There are about 180
queues on the broker, and a publishing event usually goes out to about
3-10 queues.

We've already tested the underlying infrastructure (host, network,
etc.) and tuned the ActiveMQ settings to the best of our knowledge, yet
we continue to see these issues. Recently, we enabled PFC (producer
flow control), which helped bring down the number of occurrences.

Here's the startup command:

/usr/java/default/bin/java -DuseProfiler=true
-javaagent:/usr/local/appdynamics-agent-3.5.5/javaagent.jar
-Dappdynamics.agent.nodeName=hostname.fqdn
-Dappdynamics.agent.logs.dir=/logs/appdynamics/ -Xmx4096M -Xms4096M
-XX:MaxPermSize=128m
-Dorg.apache.activemq.UseDedicatedTaskRunner=false
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500
-Djava.util.logging.config.file=logging.properties
-Dactivemq.classpath=/apps/apache-activemq/conf
-Dactivemq.conf=/apps/apache-activemq/conf -jar
/apps/apache-activemq/bin/run.jar start

Here are links to some AppDynamics graphs (a publishing run started at
8:30am), as well as our ActiveMQ config and the complete console output
log.

https://dl.dropbox.com/u/2739192/amq/activemq.xml
https://dl.dropbox.com/u/2739192/amq/broker.log

https://dl.dropbox.com/u/2739192/amq/io.png
https://dl.dropbox.com/u/2739192/amq/net_io.png
https://dl.dropbox.com/u/2739192/amq/jmx.png
https://dl.dropbox.com/u/2739192/amq/jvm.png
https://dl.dropbox.com/u/2739192/amq/mem.png

Here's the relevant portion of the ActiveMQ config for ease of reference.


    <broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="broker-publishing-legacy"
        useJmx="true"
        dataDirectory="/data/activemq/">

        <destinationInterceptors>
            <bean xmlns="http://www.springframework.org/schema/beans"
                  id="QueueDestinationInterceptor"
                  class="com.abcde.eps.interceptor.QueueDestinationInterceptor">
            </bean>
            <virtualDestinationInterceptor>
                <virtualDestinations>
                    <virtualTopic name="VirtualTopic.>" prefix="Consumer.*." />
                </virtualDestinations>
            </virtualDestinationInterceptor>
        </destinationInterceptors>

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry topic=">"
                                 memoryLimit="16 mb"
                                 producerFlowControl="true">
                    </policyEntry>
                    <policyEntry queue=">"
                                 memoryLimit="16 mb"
                                 optimizedDispatch="true"
                                 producerFlowControl="true">
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext connectorPort="1101"
                               rmiServerPort="1100"
                               jmxDomainName="org.apache.activemq" />
        </managementContext>

        <persistenceAdapter>
            <kahaDB directory="/data/activemq/"
                    enableIndexWriteAsync="true"
                    enableJournalDiskSyncs="false"
                    journalMaxFileLength="256mb" />
        </persistenceAdapter>

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="512 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="200 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="1 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <transportConnectors>
            <transportConnector name="nio"
                                uri="nio://0.0.0.0:1102?transport.closeAsync=false" />
        </transportConnectors>

    </broker>


Any suggestions would be appreciated.

-- 
Best regards, Dmitriy V.

Re: High swap usage and network timeouts during heavy publishing

Posted by Tim Bain <tb...@alumni.duke.edu>.
Have you confirmed (via techniques such as the ones described in
https://www.cyberciti.biz/faq/linux-which-process-is-using-swap/) that the
ActiveMQ process is the one whose pages are getting swapped?
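
(For what it's worth, a minimal sketch of that check, assuming a kernel
new enough to expose VmSwap in /proc/<pid>/status; older kernels such as
RHEL 5's may not have it, in which case the smaps/smem approaches from
the linked article apply:)

    # Per-process swap usage in kB, largest first
    for f in /proc/[0-9]*/status; do
        awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, n}' "$f"
    done 2>/dev/null | sort -rn | head

If the java process running the broker tops that list, the swapping is
coming from the broker JVM itself rather than from something else on
the host.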

Also, what does top show for your buffer usage (i.e., Linux using memory
to buffer disk writes for performance reasons) during this period of time?
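
(A quick way to capture that while the publishing peak is running, as a
rough sketch; exact column names vary a bit by procps version:)

    free -m      # "buffers"/"cached" columns show RAM used for disk buffering and page cache
    vmstat 5     # "si"/"so" columns show ongoing swap-in/swap-out activity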

Linux has an OS setting (vm.swappiness) that will let you control how
willing the OS is to swap out pages. You could adjust this value to make
paging less likely (or forbid it entirely) if you wanted.
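
(For example, a sketch; the right value is workload-dependent, and the
usual Linux default is 60:)

    cat /proc/sys/vm/swappiness                      # current value
    sysctl -w vm.swappiness=10                       # prefer keeping pages in RAM, effective immediately
    echo "vm.swappiness = 10" >> /etc/sysctl.conf    # persist across reboots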

Tim

On Jun 21, 2017 12:53 PM, "Abhinav2510" <ab...@tcs.com> wrote:

> In ActiveMQ 5.13.2.

Re: High swap usage and network timeouts during heavy publishing

Posted by Abhinav2510 <ab...@tcs.com>.
In ActiveMQ 5.13.2.




Re: High swap usage and network timeouts during heavy publishing

Posted by Clebert Suconic <cl...@gmail.com>.
Is that ActiveMQ or ActiveMQ Artemis? What version?

On Wed, Jun 21, 2017 at 9:16 AM, Abhinav2510
<ab...@tcs.com> wrote:
> Tim Bain wrote
>> How much memory is your ActiveMQ process using, compared to the amount of
>> RAM on your host? Could the swap simply result from the JVM being allowed
>> to grow larger than the amount of physical memory available?
>>
>> Tim
>>
>> On Tue, Jun 20, 2017 at 3:11 PM, Abhinav2510 <abhinav.suryawanshi@...>
>> wrote:
>>
>>> I am facing this issue now.
>>> Did you come across any solution for this?
>
> My JVM memory configurations are as below.
> -Xms1024M -Xmx6144M -XX:+UseG1GC -XX:NewRatio=4
> -XX:InitiatingHeapOccupancyPercent=45 -XX:+PrintGCDetails
> -XX:+PrintGCDateStamps. We have already limited Java to no more than 6 GB,
> and our RAM is 43 GB, yet we are still seeing a gradual increase in used
> swap over time, for which we have to restart AMQ every month to release
> the swap memory.
>



-- 
Clebert Suconic

Re: High swap usage and network timeouts during heavy publishing

Posted by Abhinav2510 <ab...@tcs.com>.
Tim Bain wrote
> How much memory is your ActiveMQ process using, compared to the amount of
> RAM on your host? Could the swap simply result from the JVM being allowed
> to grow larger than the amount of physical memory available?
> 
> Tim
> 
> On Tue, Jun 20, 2017 at 3:11 PM, Abhinav2510 <abhinav.suryawanshi@...>
> wrote:
> 
>> I am facing this issue now.
>> Did you come across any solution for this?

My JVM memory configuration is as follows:

-Xms1024M -Xmx6144M -XX:+UseG1GC -XX:NewRatio=4
-XX:InitiatingHeapOccupancyPercent=45 -XX:+PrintGCDetails
-XX:+PrintGCDateStamps

We have already limited Java to no more than 6 GB, and our RAM is 43 GB,
yet we are still seeing a gradual increase in used swap over time, for
which we have to restart AMQ every month to release the swap memory.

Re: High swap usage and network timeouts during heavy publishing

Posted by Tim Bain <tb...@alumni.duke.edu>.
How much memory is your ActiveMQ process using, compared to the amount of
RAM on your host? Could the swap simply result from the JVM being allowed
to grow larger than the amount of physical memory available?
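
(If it helps, a rough sketch for eyeballing that, assuming the broker is
the only java process on the box:)

    ps -o pid,rss,vsz,cmd -C java    # resident (RSS) and virtual size of the broker JVM, in kB
    free -m                          # total/used/free RAM and swap on the host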

Tim

On Tue, Jun 20, 2017 at 3:11 PM, Abhinav2510 <ab...@tcs.com>
wrote:

> I am facing this issue now.
> Did you come across any solution for this?

Re: High swap usage and network timeouts during heavy publishing

Posted by Abhinav2510 <ab...@tcs.com>.
I am facing this issue now.
Did you come across any solution for this?




Re: High swap usage and network timeouts during heavy publishing

Posted by DV <vi...@gmail.com>.
Just following up... I wonder if anyone else has experienced high swap
usage and/or network timeouts in ActiveMQ and what steps they've taken
to fix that. Thanks!

- DV

On Sun, Jan 20, 2013 at 5:00 PM, DV <vi...@gmail.com> wrote:
> I forgot to mention one more thing. This sort of behavior (high swap
> usage and lots of timeouts) used to end with "Too many open files"
> exceptions flooding ActiveMQ's logs, until we upped the ulimit on the
> host from 1024 to max (transport.closeAsync=false did _not_ help).
> Here's a sample log entry:
>
> 2013-01-05 12:16:20,352 | ERROR | Could not accept connection :
> java.io.IOException: Too many open files |
> org.apache.activemq.broker.TransportConnector | ActiveMQ Transport
> Server: nio://0.0.0.0:1102?transport.closeAsync=false
>
> - DV
>
> On Fri, Jan 18, 2013 at 12:46 PM, DV <vi...@gmail.com> wrote:
>> [original message quoted in full; see the top of this thread]



-- 
Best regards, Dmitriy V.

Re: High swap usage and network timeouts during heavy publishing

Posted by DV <vi...@gmail.com>.
I forgot to mention one more thing. This sort of behavior (high swap
usage and lots of timeouts) used to end with "Too many open files"
exceptions flooding ActiveMQ's logs, until we upped the ulimit on the
host from 1024 to max (transport.closeAsync=false did _not_ help).
Here's a sample log entry:

2013-01-05 12:16:20,352 | ERROR | Could not accept connection :
java.io.IOException: Too many open files |
org.apache.activemq.broker.TransportConnector | ActiveMQ Transport
Server: nio://0.0.0.0:1102?transport.closeAsync=false
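
(For anyone hitting the same wall, a rough sketch of how to check the
broker's descriptor usage and limits; the user name below is just a
placeholder for whatever account runs the broker on your host:)

    pid=$(pgrep -f run.jar)       # PID of the broker JVM (assumes it's the only run.jar process)
    ls /proc/$pid/fd | wc -l      # file descriptors currently open by the broker
    ulimit -Sn; ulimit -Hn        # soft/hard limits in the shell that launches the broker
    # To raise the limit persistently, an entry in /etc/security/limits.conf
    # along these lines (for the account that starts ActiveMQ) is typical:
    #   activemq  soft  nofile  65536
    #   activemq  hard  nofile  65536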

- DV

On Fri, Jan 18, 2013 at 12:46 PM, DV <vi...@gmail.com> wrote:
> [original message quoted in full; see the top of this thread]



-- 
Best regards, Dmitriy V.