Posted to dev@activemq.apache.org by "Sree Panchajanyam D (JIRA)" <ji...@apache.org> on 2011/08/23 14:47:29 UTC

[jira] [Issue Comment Edited] (AMQ-3210) OutOfMemory error on ActiveMQ startup

    [ https://issues.apache.org/jira/browse/AMQ-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13089426#comment-13089426 ] 

Sree Panchajanyam D edited comment on AMQ-3210 at 8/23/11 12:46 PM:
--------------------------------------------------------------------

Corrupt journal files can be identified, but corrupt metadata cannot.
You can make sure the metadata is synced regularly by setting the "indexWriteBatchSize" and "checkpointInterval" parameters to suitably low values (an example configuration is in the PS below). See the documentation for these parameters at the following links:
http://activemq.apache.org/kahadb.html
http://fusesource.com/docs/broker/5.5/persistence/index.html (see "Optimizing the Metadata Cache")
If the broker crashes, the metadata on disk is left out of sync with the cache.
Hence, the best thing to do is to prevent ActiveMQ from crashing in the first place.
I also see that your XML enables producer flow control; I would advise against it unless you are sure why you need it.
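If you do decide to turn it off, producer flow control can be disabled per destination with a policy entry in activemq.xml. This is a minimal sketch, not taken from your configuration; the ">" wildcard (all queues) and the memory limit are assumptions to adjust for your setup:
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- ">" matches every queue; stop throttling their producers -->
                <policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb"/>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
This element sits inside the <broker> element. With flow control off, size the systemUsage limits generously, because fast producers are no longer throttled and excess messages spill to the store or temp store instead.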
If you are using persistent messages, use them with a time to live, and allocate store space using the following calculation: "store space = messages/second * average message size * time to live * 2".
For non-persistent messages the above calculation does not hold.
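As a worked example (illustrative numbers only): at 100 messages/second, an average message size of 10 KB, and a time to live of one hour (3,600 seconds), the store needs roughly 100 * 10 KB * 3600 * 2 ≈ 7.2 GB.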

PS: in activemq.xml:
        <persistenceAdapter>
            <kahaDB directory="${activemq.base}/data/kahadb"
                    checkForCorruptJournalFiles="true"
                    checksumJournalFiles="true"
                    indexWriteBatchSize="1000"
                    checkpointInterval="1000"/>
        </persistenceAdapter>
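Note that checkpointInterval is in milliseconds, so 1000 checkpoints the journal once a second (the default is 5000), and checkForCorruptJournalFiles together with checksumJournalFiles lets the broker detect and skip corrupted journal records on startup instead of failing.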


> OutOfMemory error on ActiveMQ startup
> -------------------------------------
>
>                 Key: AMQ-3210
>                 URL: https://issues.apache.org/jira/browse/AMQ-3210
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Message Store
>    Affects Versions: 5.4.2
>         Environment: # java -version
> java version "1.6.0_18"
> OpenJDK Runtime Environment (IcedTea6 1.8.3) (6b18-1.8.3-2~lenny1)
> OpenJDK Client VM (build 16.0-b13, mixed mode, sharing)
> # cat /etc/debian_version 
> 5.0.8
>            Reporter: Lior Okman
>            Priority: Critical
>         Attachments: activemq.xml, exception.log, kahadb.tar.bz2
>
>
> Probably due to some kind of message store corruption, when trying to start ActiveMQ, I get OutOfMemory errors and the startup simply fails.
> This can be solved by deleting /var/local/apache-activemq/kahadb, after which ActiveMQ starts with no issue.
> This issue doesn't always happen, and I'm not sure of a scenario that can reproduce this. I do have a corrupted kahadb directory that reproduces the problem.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira