Posted to users@activemq.apache.org by Graham Leggett <mi...@sharp.fm> on 2013/10/08 14:18:45 UTC

Tuning activemq for reliability, not performance

Hi all,

We have a system that does heavy message processing, where we have very few (tens to hundreds of) messages that each take minutes to process.

What we also have is periodic ActiveMQ v5.8.0 instability that causes the Java Service Wrapper to proactively send a "kill -9" signal to the activemq process. This reverts the queue right back to its state at broker startup, and long-since-processed messages suddenly come back, triggering a very expensive message re-processing exercise.

What we want to do is ensure that all memory caching in ActiveMQ is switched off completely and that all changes are written through to disk at all times. We don't care about the performance implications; reliability is our number one requirement. If ActiveMQ leaks or crashes, we want the most recent practical state preserved.

Is this possible?

Regards,
Graham
--


Re: Tuning activemq for reliability, not performance

Posted by Graham Leggett <mi...@sharp.fm>.
On 08 Oct 2013, at 4:22 PM, Gary Tully <ga...@gmail.com> wrote:

> yes this is possible and should be the norm.
> 
> What version are you using and what persistence adapter?

Currently v5.8.0, using LevelDB.

We discovered that kahadb has a memory leak in it: https://issues.apache.org/jira/browse/AMQ-4789

> How are messages acknowledged?

Automatically.

> seems like you may be using optimizeAcknowledge mode?

Not on purpose. Is this the default?

> some pointers:
> 
> use kahadb with concurrentStoreAndDispatchQueues=false (dispatch only
> after persistence) and enableJournalDiskSync=true (the default mode) to
> force a disk sync.
> use persistent messages and alwaysSyncSend on your client. So sends
> only complete when the message is on disk.
> use transacted sessions in your consumers so that you have guarantees
> around message acknowledgement, again forcing a disk sync on commit.
> 
> Do your disks support fsync?

What we're keen to do is ensure that the server always does the right thing. Is there a way to configure the server to behave like this without having to change every client?
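
(For what it's worth, the connection-configuration docs suggest the ActiveMQ JMS client accepts jms.-prefixed options on the broker URL, so if client URLs are centrally managed the settings could be pinned in the URL rather than in code. The host below is a placeholder, and this applies only to the OpenWire/JMS client, not to AMQP clients:)

```
failover:(ssl://broker.example.com:61616)?jms.alwaysSyncSend=true&jms.optimizeAcknowledge=false
```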

Our existing config is below:

<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">

        <!--
            For better performance, use the VM cursor and a small memory limit.
            For more information, see:

            http://activemq.apache.org/message-cursors.html

            Also, if your producer is "hanging", it's probably due to producer flow control.
            For more information, see:
            http://activemq.apache.org/producer-flow-control.html
        -->

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" producerFlowControl="true">
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers from blocking producers and affecting
                         other consumers by limiting the number of messages that are retained.
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
                <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
                  <!-- Use the file-based cursor to keep pending messages on disk
                       For more information, see:

                       http://activemq.apache.org/message-cursors.html

                  -->
                  <pendingQueuePolicy>
                    <fileQueueCursor/>
                  </pendingQueuePolicy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <!--
            <kahaDB directory="${activemq.data}/kahadb"/>
            -->
            <levelDB directory="${activemq.data}/leveldb"/>
        </persistenceAdapter>


          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
            If using ActiveMQ embedded, the following limits could safely be used:

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="20 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="100 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>
        -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="64 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <!--
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            -->
            <!-- AMQP with SSL client certs -->
            <transportConnector name="amqp" uri="amqp+ssl://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.transformer=jms&amp;needClientAuth=true&amp;transport.closeAsync=false"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->

Regards,
Graham
--


Re: Tuning activemq for reliability, not performance

Posted by Gary Tully <ga...@gmail.com>.
yes this is possible and should be the norm.

What version are you using and what persistence adapter?
How are messages acknowledged?
seems like you may be using optimizeAcknowledge mode?

some pointers:

use kahadb with concurrentStoreAndDispatchQueues=false (dispatch only
after persistence) and enableJournalDiskSync=true (the default mode) to
force a disk sync.
use persistent messages and alwaysSyncSend on your client. So sends
only complete when the message is on disk.
use transacted sessions in your consumers so that you have guarantees
around message acknowledgement, again forcing a disk sync on commit.
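
As a sketch, that persistenceAdapter section could look like the following (the directory is just an example path; verify the attribute names against the kahadb reference for your version):

```xml
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          concurrentStoreAndDispatchQueues="false"
          enableJournalDiskSync="true"/>
</persistenceAdapter>
```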

Do your disks support fsync?
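
At the JVM level, a journal disk sync boils down to an fsync via FileChannel.force, which only returns once the bytes have reached the device. A minimal, self-contained sketch of that pattern (not ActiveMQ code, purely illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyncWrite {
    // Write bytes to the given file, then force both data and metadata
    // to the physical device before returning -- the same fsync barrier
    // that enableJournalDiskSync relies on. Returns the byte count written.
    static long syncWrite(Path file, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long written = ch.write(ByteBuffer.wrap(data));
            ch.force(true); // blocks until the bytes are on disk
            return written;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = java.nio.file.Files.createTempFile("journal", ".log");
        System.out.println(syncWrite(tmp, "message-1".getBytes()) + " bytes synced");
    }
}
```

If force(true) is slow on your hardware, that is usually the cost of a real fsync; if it is suspiciously fast, the disk or controller may be lying about write-through, which undermines any broker-level sync setting.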


On 8 October 2013 13:18, Graham Leggett <mi...@sharp.fm> wrote:
> Hi all,
>
> We have a system that does heavy message processing, where we have very few (tens to hundreds of) messages that each take minutes to process.
>
> What we also have is periodic ActiveMQ v5.8.0 instability that causes the Java Service Wrapper to proactively send a "kill -9" signal to the activemq process. This reverts the queue right back to its state at broker startup, and long-since-processed messages suddenly come back, triggering a very expensive message re-processing exercise.
>
> What we want to do is ensure that all memory caching in ActiveMQ is switched off completely and that all changes are written through to disk at all times. We don't care about the performance implications; reliability is our number one requirement. If ActiveMQ leaks or crashes, we want the most recent practical state preserved.
>
> Is this possible?
>
> Regards,
> Graham
> --
>



-- 
http://redhat.com
http://blog.garytully.com