Posted to users@activemq.apache.org by sambit <sa...@apple.com> on 2007/06/26 17:40:45 UTC

Re: Out of memory errors

Hi James,
   We are doing a stress test on ActiveMQ to see if it can meet our
production requirements. Our requirement is 3K messages per second. During
the stress test we started with a small number of messages; here are the
details. After sending 130K messages it started giving OutOfMemory errors.
When we looked into the JMX console, we could see that the memory was
increasing gradually and never coming down. Is there a way we can force GC
to happen, maybe at an interval of 30 minutes, in the memory manager
configuration? The GC settings are already there at the JVM level as -D
options, but the memory still grows to the amount we have set in the memory
manager element of broker-config.xml. Once it reaches the limit, it starts
giving out-of-memory errors. Our machines are 8 GB, 2-CPU Intel boxes with
a heap size of 1024MB. We are not setting any extra parameter through the
connection factory except copyOnMessageSend to false. It is using the
default prefetch sizes. Persistence is on by default. The DB is Oracle 10g
for persistence, without the Journal. Should we use the Journal? If we use
the Journal, can we persist the messages in the Oracle database, or will it
always write the data to file storage?
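
[For reference, the memory-manager element sambit mentions is set in
broker-config.xml along these lines under the ActiveMQ 4.x schema (a
sketch; the 512 MB limit is illustrative, and note that this limit caps
the broker's message memory rather than triggering any GC):]

```xml
<broker xmlns="http://activemq.org/config/1.0" brokerName="localhost">
  <!-- Caps the heap the broker will use for in-flight messages.
       Hitting the limit makes producers block or the broker error out;
       it does not force a garbage collection. -->
  <memoryManager>
    <usageManager id="memory-manager" limit="512 MB"/>
  </memoryManager>
</broker>
```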

Could you please give some pointers to tackle this problem? This is quite
important for us, as we have a few strategic decisions to make.

Regards
-Sambit    

James.Strachan wrote:
> 
> Try turning off asyncSend. Also this might help when you've done that
> http://activemq.apache.org/my-producer-blocks.html
> 
> 
> 
> On 3/22/07, Po Cheung <po...@yahoo.com> wrote:
>>
>> Here is some more info, if that would help. We have four persistent
>> queues: A, B, C, and D.
>>
>> - Client 1 sends a message each to queues A, B, and C at an average rate
>>   of about 2 per second.
>> - Client 2 receives a message from queue A, processes it, and sends a
>>   result message to queue D.
>> - Client 3 receives a message from queue B, processes it, and sends a
>>   result message to queue D.
>> - Client 4 receives a message from queue C, processes it, and sends a
>>   result message to queue D.
>> - Client 1 receives and processes result messages from queue D on a
>>   separate thread.
>>
>> Client1: useAsyncSend=true
>>
>> Po
>>
>>
>> Po Cheung wrote:
>> >
>> >
>> > Po Cheung wrote:
>> >>
>> >> We got OutOfMemory errors on both the broker and client after
>> >> sending/receiving 600K messages to persistent queues with transactions.
>> >> The memory graph below shows the heap usage of the broker growing
>> >> gradually. Max heap size is 512MB. UsageManager is also at 512MB (will
>> >> that cause a problem? Should it be less than the max heap size?). When
>> >> the JMS transaction is turned off, the heap usage never exceeds 10MB
>> >> and we do not run out of memory. There is no backlog in the queues, so
>> >> it should not be a fast-producer, slow-consumer issue. Are there any
>> >> known issues of memory leaks in ActiveMQ with transacted messages?
>> >>
>> >> Details:
>> >> - ActiveMQ 4.1.1 SNAPSHOT
>> >> - Kaha
>> >> - Default pre-fetch limit
>> >>
>> >> Po
>> >>
>> >  http://www.nabble.com/file/7328/ActiveMQHeap.gif
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Out-of-memory-errors-tf3443750s2354.html#a9618502
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>
>>
> 
> 
> -- 
> 
> James
> -------
> http://radio.weblogs.com/0112098/
> 
> 

-- 
View this message in context: http://www.nabble.com/Out-of-memory-errors-tf3443750s2354.html#a11307579
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
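
[As background on James's earlier suggestion to turn off asyncSend: with
ActiveMQ this can be set directly on the broker URL, e.g. in a Spring bean
definition (a sketch; the URL and bean id are placeholders):]

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <!-- jms.useAsyncSend=false makes sends synchronous, so a slow broker
       applies back-pressure to producers instead of letting them buffer
       messages in memory; jms.copyMessageOnSend=false matches the
       copyOnMessageSend setting sambit describes above -->
  <property name="brokerURL"
            value="tcp://localhost:61616?jms.useAsyncSend=false&amp;jms.copyMessageOnSend=false"/>
</bean>
```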


Re: Out of memory errors

Posted by sambit <sa...@apple.com>.
Hi James,
  Thank you very much for the quick response. We are using ActiveMQ 4.1.1;
I think this is the latest build as of now. We are using Topics and fast
consumers.

In case I go for the Journal option, could you please let me know how much
time it takes to persist a message from the journal files to the DB for
permanent persistence? Can this time be configured? The reason I ask is to
understand failover: if a message is written to the journal but the broker
goes down, the temporary journal files cannot be redistributed.

One more thing I wanted to check: is there any plan in the near future to
offer an option of integrating the persistence mechanism with a distributed
caching framework, like the JDBC adapter or Journal adapter?

Regards
-Sambit 

James.Strachan wrote:
> 
> What version of ActiveMQ are you using? Using topics/queues? Do you
> have slow consumers?
> 
> When using the journal, it always writes messages to the journal then
> checkpoints them later on to persistent store. If you want a high
> throughput (say 3000 messages/second for persistent messages) then I'd
> recommend the journal.
> 
> 
> 
> 
> -- 
> James
> -------
> http://macstrac.blogspot.com/
> 
> 

-- 
View this message in context: http://www.nabble.com/Out-of-memory-errors-tf3443750s2354.html#a11310077
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Out of memory errors

Posted by James Strachan <ja...@gmail.com>.
What version of ActiveMQ are you using? Using topics/queues? Do you
have slow consumers?

When using the journal, the broker always writes messages to the journal
first and then checkpoints them later to the persistent store. If you want
high throughput (say 3000 messages/second for persistent messages) then I'd
recommend the journal.
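
[In broker-config.xml terms, combining the journal with the existing Oracle
JDBC store looks roughly like this (a sketch; `#oracle-ds` stands for
whatever DataSource bean the broker configuration already defines, and the
directory and log-file count are illustrative):]

```xml
<persistenceAdapter>
  <!-- Messages are first written sequentially to local journal log files
       for speed, then checkpointed in batches to the JDBC store
       (Oracle in this thread's setup) -->
  <journaledJDBC journalLogFiles="5"
                 dataDirectory="../activemq-data"
                 dataSource="#oracle-ds"/>
</persistenceAdapter>
```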




-- 
James
-------
http://macstrac.blogspot.com/