Posted to dev@activemq.apache.org by "Noah Zucker (JIRA)" <ji...@apache.org> on 2007/11/26 22:54:26 UTC

[jira] Created: (AMQ-1503) OutOfMemoryError in ActiveMQ message broker when attempting to publish 2 million messages - one publisher and four durable subscribers

OutOfMemoryError in ActiveMQ message broker when attempting to publish 2 million messages - one publisher and four durable subscribers
-------------------------------------------------------------------------------------------------------------------------------------

                 Key: AMQ-1503
                 URL: https://issues.apache.org/activemq/browse/AMQ-1503
             Project: ActiveMQ
          Issue Type: Bug
          Components: Broker
    Affects Versions: 4.1.1
            Reporter: Noah Zucker


(reference original posting on Nabble: http://www.nabble.com/forum/ViewPost.jtp?post=12798657)

We have one topic publisher attempting to publish 2 million messages to 4 durable subscribers.  Things go fine until we hit roughly 1.7 million messages - then the broker throws an OutOfMemoryError.

ActiveMQ is set up to use 5 x 20 MB journal log files and Derby JDBC persistence.  We use the JVM memory setting -Xmx1024M and <usageManager id="memory-manager" limit="512 MB"/>

At the time of the OutOfMemoryError, one of the journal log files has grown out of control to 415 MB.  The Derby database is large as well.

We are using session client acknowledgment and a prefetch size of 1 (we need to serialize message consumption).

Each message is acknowledged using javax.jms.Message.acknowledge().
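For reference, a minimal sketch of the consumer setup described above, written against the ActiveMQ 4.x / JMS 1.1 client API. The broker URL, client ID, topic name, and subscription name are illustrative placeholders, not values from the original report; it assumes a broker is running and the ActiveMQ client jars are on the classpath.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        // Prefetch of 1 can be requested via a connection URL option so the
        // broker dispatches only one unacknowledged message at a time.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.topicPrefetch=1");
        Connection connection = factory.createConnection();
        connection.setClientID("subscriber-1"); // required for a durable subscription
        connection.start();

        // CLIENT_ACKNOWLEDGE: messages remain unacknowledged (and recoverable)
        // until acknowledge() is called explicitly.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Topic topic = session.createTopic("EXAMPLE.TOPIC"); // placeholder name
        MessageConsumer consumer = session.createDurableSubscriber(topic, "sub-1");

        Message message;
        while ((message = consumer.receive(5000)) != null) {
            // ... process the message serially ...
            message.acknowledge(); // per JMS, this acknowledges all messages
                                   // consumed so far on this session
        }
        connection.close();
    }
}
```

Note that in JMS, Message.acknowledge() acknowledges every message consumed on the session up to that point, not just the one it is called on; with a prefetch of 1 and serial consumption the effect is the same either way.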

We could not find any documentation on how to change the checkpoint interval for persistence.
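The broker settings described above would typically live in activemq.xml. A hypothetical fragment for ActiveMQ 4.1.x is sketched below; the journaledJDBC attributes (journalLogFiles, journalLogFileSize, checkpointInterval) are assumptions about the 4.1.x journal persistence adapter, not configuration taken from the original report.

```xml
<!-- Hypothetical activemq.xml fragment (ActiveMQ 4.1.x); attribute names are
     assumptions, not tested configuration from the report. -->
<broker xmlns="http://activemq.org/config/1.0">

  <!-- 512 MB broker memory limit, as described above -->
  <memoryManager>
    <usageManager id="memory-manager" limit="512 MB"/>
  </memoryManager>

  <!-- Journaled JDBC persistence backed by Derby: 5 x 20 MB journal log
       files; checkpointInterval (milliseconds) controls how often journal
       entries are checkpointed into the JDBC store -->
  <persistenceAdapter>
    <journaledJDBC journalLogFiles="5" journalLogFileSize="20 MB"
                   checkpointInterval="30000"/>
  </persistenceAdapter>

</broker>
```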

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Assigned: (AMQ-1503) OutOfMemoryError in ActiveMQ message broker when attempting to publish 2 million messages - one publisher and four durable subscribers

Posted by "Rob Davies (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rob Davies reassigned AMQ-1503:
-------------------------------

    Assignee: Rob Davies




[jira] Resolved: (AMQ-1503) OutOfMemoryError in ActiveMQ message broker when attempting to publish 2 million messages - one publisher and four durable subscribers

Posted by "Rob Davies (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rob Davies resolved AMQ-1503.
-----------------------------

       Resolution: Fixed
    Fix Version/s: 5.0.0

The ability to handle very large numbers of messages was a driver behind the architectural changes in ActiveMQ 5.0 - see http://activemq.apache.org/message-cursors.html

