Posted to dev@activemq.apache.org by "Phil Pickett (JIRA)" <ji...@apache.org> on 2010/03/19 01:29:45 UTC

[jira] Created: (AMQ-2658) Broker's JMX attribute "MemoryPercentageUsage" greater than 100%

Broker's JMX attribute "MemoryPercentageUsage" greater than 100%
----------------------------------------------------------------

                 Key: AMQ-2658
                 URL: https://issues.apache.org/activemq/browse/AMQ-2658
             Project: ActiveMQ
          Issue Type: Bug
          Components: Broker
    Affects Versions: 5.3.0
            Reporter: Phil Pickett


Using the producer and consumer samples, with the build.xml modified to use a topic with a durable subscriber and quite a few large messages, I see the MemoryPercentUsage grow to 225% after stopping and restarting the consumer multiple times.  Everything seems to run fine, but the large percentage makes me wonder how long this could continue.  I don't have producerFlowControl enabled for topics.
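
For context, a minimal Java sketch of the kind of broker setup described here, with producer flow control disabled for all topics via a PolicyEntry (the activemq.xml equivalent is a policyEntry with topic=">" and producerFlowControl="false"). The class name and connector URI are illustrative and not taken from the attached config:

  import java.util.Collections;
  import org.apache.activemq.broker.BrokerService;
  import org.apache.activemq.broker.region.policy.PolicyEntry;
  import org.apache.activemq.broker.region.policy.PolicyMap;

  public class NoTopicFlowControlBroker {
      public static void main(String[] args) throws Exception {
          // Illustrative setup, not the attached activemq.xml.
          BrokerService broker = new BrokerService();
          broker.setBrokerName("localhost");
          broker.addConnector("tcp://0.0.0.0:61616");

          // Disable producer flow control for every topic (">" is the topic wildcard).
          PolicyEntry topicPolicy = new PolicyEntry();
          topicPolicy.setTopic(">");
          topicPolicy.setProducerFlowControl(false);

          PolicyMap policyMap = new PolicyMap();
          policyMap.setPolicyEntries(Collections.singletonList(topicPolicy));
          broker.setDestinationPolicy(policyMap);

          broker.start();
      }
  }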

The "problem" seems to occur after the long Index updates as shown in the messages below.  I'm attaching a jconsole screenshot showing the high percentage along with the example build.xml and the activemq.xml.  This is easily recreated.

phil@ubuntu910:~/apache-activemq-5.3.0/bin$ ./activemq xbean:file:../conf/memusage-activemq.xml
Java Runtime: Sun Microsystems Inc. 1.6.0_18 /opt/jdk1.6.0_18/jre
  Heap sizes: current=15552k  free=14290k  max=506816k
    JVM args: -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Djava.util.logging.config.file=logging.properties -Dcom.sun.management.jmxremote -Dactivemq.classpath=/home/phil/apache-activemq-5.3.0/conf; -Dactivemq.home=/home/phil/apache-activemq-5.3.0 -Dactivemq.base=/home/phil/apache-activemq-5.3.0
ACTIVEMQ_HOME: /home/phil/apache-activemq-5.3.0
ACTIVEMQ_BASE: /home/phil/apache-activemq-5.3.0
Loading message broker from: xbean:file:../conf/memusage-activemq.xml
 INFO | Using Persistence Adapter: org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter@1e0f2f6
 INFO | ActiveMQ 5.3.0 JMS Message Broker (localhost) is starting  
 INFO | For help or more information please see: http://activemq.apache.org/ 
 INFO | Listening for connections at: tcp://ubuntu910:61616 
 INFO | Connector openwire Started 
 INFO | ActiveMQ JMS Message Broker (localhost, ID:ubuntu910-52519-1268950522231-0:0) started 
 INFO | Logging to org.slf4j.impl.JCLLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 
 INFO | jetty-6.1.9
 INFO | ActiveMQ WebConsole initialized.
 INFO | Initializing Spring FrameworkServlet 'dispatcher'
 INFO | ActiveMQ Console at http://0.0.0.0:8161/admin 
 INFO | Initializing Spring root WebApplicationContext 
 INFO | Connector vm://localhost Started 
 INFO | Camel Console at http://0.0.0.0:8161/camel 
 INFO | ActiveMQ Web Demos at http://0.0.0.0:8161/demo 
 INFO | RESTful file access application at http://0.0.0.0:8161/fileserver 
 INFO | Started SelectChannelConnector@0.0.0.0:8161
 INFO | Slow KahaDB access: Journal append took: 0 ms, Index update took 5815 ms 
 INFO | Slow KahaDB access: cleanup took 1937 
 INFO | Slow KahaDB access: Journal append took: 49 ms, Index update took 4031 ms 
 INFO | Slow KahaDB access: cleanup took 1621 
 INFO | Slow KahaDB access: Journal append took: 70 ms, Index update took 2314 ms 
 INFO | Slow KahaDB access: cleanup took 721 
 INFO | Slow KahaDB access: cleanup took 681 
 INFO | Slow KahaDB access: Journal append took: 7 ms, Index update took 46591 ms 
 INFO | Slow KahaDB access: cleanup took 45788 
 INFO | Slow KahaDB access: cleanup took 1592 
 INFO | Slow KahaDB access: Journal append took: 55 ms, Index update took 452 ms 
 INFO | Slow KahaDB access: cleanup took 1673 
 INFO | Slow KahaDB access: cleanup took 1343 
 INFO | Slow KahaDB access: Journal append took: 102 ms, Index update took 620 ms 
 INFO | Slow KahaDB access: cleanup took 1215 
 INFO | Slow KahaDB access: cleanup took 816 
 INFO | Slow KahaDB access: cleanup took 1454 
 INFO | Slow KahaDB access: cleanup took 851



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (AMQ-2658) Broker's JMX attribute "MemoryPercentageUsage" greater than 100%

Posted by "Phil Pickett (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=58594#action_58594 ] 

Phil Pickett commented on AMQ-2658:
-----------------------------------

I just tried this with 5.3.1 and yes, I see the same thing, with the MemoryPercentUsage reaching 225%.


[jira] Updated: (AMQ-2658) Broker's JMX attribute "MemoryPercentageUsage" greater than 100%

Posted by "Phil Pickett (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phil Pickett updated AMQ-2658:
------------------------------

    Attachment: MemPercentUsage225.png
                build.xml
                activemq.xml


[jira] Commented: (AMQ-2658) Broker's JMX attribute "MemoryPercentageUsage" greater than 100%

Posted by "Rob Davies (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=58548#action_58548 ] 

Rob Davies commented on AMQ-2658:
---------------------------------

Curious if you see the same on 5.3.1?


[jira] Commented: (AMQ-2658) Broker's JMX attribute "MemoryPercentageUsage" greater than 100%

Posted by "Phil Pickett (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=59992#action_59992 ] 

Phil Pickett commented on AMQ-2658:
-----------------------------------

BTW - I tried this using 5.3.2 and the current 5.4 snapshot and see similar behavior to that mentioned above.
