Posted to users@activemq.apache.org by Michele <mi...@finconsgroup.com> on 2016/03/25 14:53:29 UTC

Active MQ - OutOfMemory in JbossFuse 6.2 context

Hi everyone,

I'm new to ActiveMQ. To meet a business requirement I have a Camel route that
reads a large file (~40000), splits it and stores each row as a message in an
ActiveMQ queue; a pool of consumers then processes each message by invoking a
REST service.
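
For context, the route has roughly this shape (a simplified sketch in Spring
XML, not my real configuration; directory, queue name and options are just
placeholders):

<route id="fileToQueue">
    <!-- pick up the large CSV file -->
    <from uri="file:/data/inbox?include=.*\.csv"/>
    <!-- split line by line; streaming avoids keeping the whole file in memory -->
    <split streaming="true">
        <tokenize token="\n"/>
        <!-- each row becomes one message on the queue -->
        <to uri="activemq:queue:Inbound"/>
    </split>
</route>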

After about 500 messages, ActiveMQ goes down with an OutOfMemoryError (Java heap space).

I use JBoss Fuse ESB 6.2 (based on ActiveMQ 5.11) and, following
http://activemq.apache.org/javalangoutofmemory.html, I modified activemq.xml
in installationJBFuseDir/etc, but the problem continues:

<policyEntry queue=">" producerFlowControl="false"
             optimizedDispatch="true" queuePrefetch="0">
    <pendingMessageLimitStrategy>
        <constantPendingMessageLimitStrategy limit="1000"/>
    </pendingMessageLimitStrategy>
    <deadLetterStrategy>
        <individualDeadLetterStrategy queuePrefix="Test.DLQ."/>
    </deadLetterStrategy>
    <pendingQueuePolicy>
        <storeCursor/>
    </pendingQueuePolicy>
</policyEntry>

<persistenceAdapter>
    <levelDB directory="${data}/leveldb"/>
</persistenceAdapter>
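
For completeness, the policyEntry sits inside the broker's destinationPolicy
element; the surrounding structure in activemq.xml looks roughly like this
(broker attributes shortened, values only illustrative):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="amq" dataDirectory="${data}">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- the policyEntry shown above goes here -->
            </policyEntries>
        </policyMap>
    </destinationPolicy>
    <!-- persistence adapter as above -->
    <persistenceAdapter>
        <levelDB directory="${data}/leveldb"/>
    </persistenceAdapter>
</broker>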

And I added the JVM properties:
-Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=false

Errors:

PooledConnectionFactory - Expiring connection ActiveMQConnection
{id=ID:FGBAL201530-50934-1458820732064-7:3,clientId=ID:FGBAL201530-50934-1458820732064-6:2,started=false}
on IOException: Unexpected error occurred: java.lang.OutOfMemoryError: Java
heap space

or

Ignoring no space left exception, java.io.IOException: Java heap space
java.io.IOException: Java heap space
	at
org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
	at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_79]
	at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_79]
	at java.lang.Thread.run(Thread.java:745)[:1.7.0_79]
Caused by: java.lang.OutOfMemoryError: Java heap space
	at org.fusesource.hawtbuf.Buffer.<init>(Buffer.java:42)
	at
org.apache.activemq.leveldb.RecordLog$LogReader.read(RecordLog.scala:380)
	at
org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
	at
org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
	at org.apache.activemq.leveldb.RecordLog.get_reader(RecordLog.scala:644)
	at org.apache.activemq.leveldb.RecordLog.read(RecordLog.scala:654)
	at
org.apache.activemq.leveldb.LevelDBClient.getMessage(LevelDBClient.scala:1335)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1274)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1271)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1359)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1358)
	at
org.apache.activemq.leveldb.LevelDBClient$RichDB.check$4(LevelDBClient.scala:323)
	at
org.apache.activemq.leveldb.LevelDBClient$RichDB.cursorRange(LevelDBClient.scala:325)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply$mcV$sp(LevelDBClient.scala:1358)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
	at
org.apache.activemq.leveldb.LevelDBClient.usingIndex(LevelDBClient.scala:1038)
	at
org.apache.activemq.leveldb.LevelDBClient$$anonfun$might_fail_using_index$1.apply(LevelDBClient.scala:1044)
	at
org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:549)
	at
org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)
	at
org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)
	at
org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)
	at
org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)
	at
org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)
	at
org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)
	at
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)
	at
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)
	at
org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)
	at
org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)
	at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)
	at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)
	at
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)

How should I configure the broker to handle this load?

Thanks in advance

Best Regards

Michele






Re: Active MQ - OutOfMemory in JbossFuse 6.2 context

Posted by Michele <mi...@finconsgroup.com>.
Hi Tim,

thank you for your reply.

I resolved it, but I'm still working on this issue to improve the performance
of the process (read a large number of lines from a file, split, process and
store in ActiveMQ, then invoke a REST service) and to find the best balance
between producer and consumer.

I resolved the OOM problem by launching JBoss Fuse with

-Xms512M -Xmx1024M -XX:PermSize=256M -XX:MaxPermSize=512M

and changing the policyEntry and systemUsage in activemq.xml:

<policyEntry queue=">" producerFlowControl="true" memoryLimit="250mb"
             optimizedDispatch="true">
</policyEntry>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="80"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="50 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
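
As a rough sanity check on those numbers (assuming the 1024 MB heap above):
percentOfJvmHeap="80" gives the broker about 0.8 * 1024 MB ≈ 820 MB for all
destinations, each queue is capped at 250 MB by the policyEntry, and with
producerFlowControl="true" a producer that fills a queue is throttled instead
of pushing the JVM into an OutOfMemoryError.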

I hope I'm going about this the right way.

Thanks so much again

Kind Greetings

Michele






Re: Active MQ - OutOfMemory in JbossFuse 6.2 context

Posted by Tim Bain <tb...@alumni.duke.edu>.
Did you ever resolve this question?  Sorry I never responded, though if I
had I'd have said that it seemed pretty inconceivable that you doubled your
heap size and still OOMed after exactly the same number of messages, so I
would suspect that you hadn't properly configured your heap size.  Was I
right?

Tim

On Mon, Mar 28, 2016 at 3:09 AM, Michele <mi...@finconsgroup.com>
wrote:

> Hi Tim
>
> thanks a lot for your reply.
>
> What are your systemLimits, especially your storeUsage and memoryUsage?
> Default configuration
> <systemUsage>
>         <memoryUsage>
>                 <memoryUsage limit="64 mb"/>
>         </memoryUsage>
>         <storeUsage>
>                 <storeUsage limit="100 gb"/>
>         </storeUsage>
>         <tempUsage>
>                 <tempUsage limit="50 gb"/>
>         </tempUsage>
> </systemUsage>
>
> Why is producerFlowControl="false"?  The sole purpose of PFC is to prevent
> the broker from OOMing when given more messages than it can store; you
> should have this on.
> Probably the false value is a test configuration.
>
> What are the units for the number 40000 that you quoted?  Bytes?  Lines?
> 40000 lines in a single file
>
> Is the OOM happening after 500 lines of a single file (or of a few files),
> or after around 500 files (each of which produces many messages)?  I think
> you said the former, but want to be sure.
> After about 500 lines. I tried without consumer in the camel route (Read
> and
> Store) and the result is the same OOM.
>
> How large is each line (and therefore how large is each message)?
> Simple csv with 15 fields for row
>
> What happens if you temporarily increase the heap to 1GB?  How many
> messages does it take in that scenario?
>
> Same result...and from Message History i see that the exception occured
> when
> the component put the message in Active MQ
> 15:21:25,982 | ERROR | inbound_Worker-1 | DefaultErrorHandler
> |
> 198 - org.apache.camel.camel-core - 2.15.1.redhat-620133 -
> IF_INGESTATE-Inbound-Context - IMP-IF-Ingestate-20160326-152100 | Failed
> delivery for (MessageId: ID-FGBAL201530-55754-1459002017865-1-275 on
> ExchangeId: ID-FGBAL201530-55754-1459002017865-1-276). Exhausted after
> delivery attempt: 1 caught:
> org.springframework.jms.UncategorizedJmsException: Uncategorized exception
> occured during JMS processing; nested exception is javax.jms.JMSException:
> Java heap space
>
> Message History
>
> ---------------------------------------------------------------------------------------------------------------------------------------
> RouteId              ProcessorId          Processor
> Elapsed (ms)
> [FileRetriever_Rout] [FileRetriever_Rout]
>
> [file://C:/CRT-2.0/IF-Ingestate/inbox?include=%5EEdenred_%5B0-9%5D%7B8%7D.csv&s]
> [     25929]
> [FileRetriever_Rout] [unmarshal5        ]
> [unmarshal[ref:IncomingFileDataFormat]
> ] [         0]
> [FileRetriever_Rout] [setHeader33       ] [setHeader[CamelSplitIndex]
> ] [         0]
> [FileRetriever_Rout] [setBody2          ] [setBody[simple{${body[0]}}]
> ] [        16]
> [FileRetriever_Rout] [choice4           ]
> [when[simple{${body[INGENICO_OPERATION_ID]} regex '[0-9]+'}]choice[]
> ] [      1045]
> [FileRetriever_Rout] [log41             ] [log
> ] [         0]
> [FileRetriever_Rout] [to14              ]
> [activemq:queue:IF_INGESTATE_Inbound
> ] [      1030]
>
> Exchange
>
> ---------------------------------------------------------------------------------------------------------------------------------------
> Exchange[
>         Id                  ID-FGBAL201530-55754-1459002017865-1-276
>         ExchangePattern     InOnly
>         Headers             {breadcrumbId=IMP-IF-Ingestate-20160326-152100,
> CamelFileAbsolute=true,
> CamelFileAbsolutePath=C:\CRT-2.0\IF-Ingestate\inbox\Edenred_15092015.csv,
> CamelFileContentType=application/vnd.ms-excel,
> CamelFileLastModified=1458774180380, CamelFileLength=6961789,
> CamelFileName=Edenred_15092015.csv,
> CamelFileNameConsumed=Edenred_15092015.csv,
> CamelFileNameOnly=Edenred_15092015.csv,
> CamelFileParent=C:\CRT-2.0\IF-Ingestate\inbox,
> CamelFilePath=C:\CRT-2.0\IF-Ingestate\inbox\Edenred_15092015.csv,
> CamelFileRelativePath=Edenred_15092015.csv, CamelRedelivered=false,
> CamelRedeliveryCounter=0, CamelSplitIndex=58,
> CrmRSActionPath=/tk_rt_ticket/ingestate/maintenance,
> ImportDateTime=20160326-152100,
> MsgCorrelationId=Inbound_INGESTATE_20160326-152100}
>         BodyType            java.util.HashMap
>         Body                {TICKET_ID=00108571,
> INGENICO_OPERATION_ID=1721709,
> TERMINAL_NUMBEROFCONTACTS=1, TICKET_STATUS=1,
> TERMINAL_LAST_CONTACT_DATE=2012-11-03 00:51:39.847,
> TERMINAL_TECHNOLOGY=TELIUM, TERMINAL_SN=0000107370942739,
> TERMINAL_INGESTATE_ID=ICT-TICKET-RESTAURANT:91229702,
> TICKET_REGISTRATION_DATE=2012-11-02 10:08:00.000, TERMINAL_PN=M40,
> TERMINAL_FIRST_CONTACT_DATE=2012-11-03 00:49:33.780, TERMINAL_ID=91229702,
> RECORDER=EDENRED}
> ]
>
> Stacktrace
>
> ---------------------------------------------------------------------------------------------------------------------------------------
> org.springframework.jms.UncategorizedJmsException: Uncategorized exception
> occured during JMS processing; nested exception is javax.jms.JMSException:
> Java heap space
>         at
>
> org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:316)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
>         at
>
> org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
>         at
>
> org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
>         at
>
> org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:235)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
>         at
>
> org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:413)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
>         at
>
> org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:367)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
>         at
>
> org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:153)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
>         at
>
> org.apache.camel.processor.SendProcessor.process(SendProcessor.java:129)[198:org.apache.camel.camel-core:2.15.1.redhat-620133]
>
> However, I'm working on both fronts in order to optimize Producer and
> Consumer.
> In this scenario, i prefer that the producer is fastest than the consumer
> in
> order to don't overload the destination endpoint.
>
> Thanks in advance.
>
> Best Regards
>
> Michele
>
>
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Active-MQ-OutOfMemory-in-JbossFuse-6-2-context-tp4709960p4709984.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>

Re: Active MQ - OutOfMemory in JbossFuse 6.2 context

Posted by Michele <mi...@finconsgroup.com>.
Hi Tim

thanks a lot for your reply. 

> What are your systemLimits, especially your storeUsage and memoryUsage?
Default configuration:
<systemUsage>
	<memoryUsage>
		<memoryUsage limit="64 mb"/>
	</memoryUsage>
	<storeUsage>
		<storeUsage limit="100 gb"/>
	</storeUsage>
	<tempUsage>
		<tempUsage limit="50 gb"/>
	</tempUsage>
</systemUsage>

> Why is producerFlowControl="false"?  The sole purpose of PFC is to prevent
> the broker from OOMing when given more messages than it can store; you
> should have this on.
The "false" value is probably left over from a test configuration.

> What are the units for the number 40000 that you quoted?  Bytes?  Lines?
40000 lines in a single file.

> Is the OOM happening after 500 lines of a single file (or of a few files),
> or after around 500 files (each of which produces many messages)?  I think
> you said the former, but want to be sure.
After about 500 lines. I tried without a consumer in the Camel route (read
and store only) and the result is the same OOM.

> How large is each line (and therefore how large is each message)?
A simple CSV with 15 fields per row.

> What happens if you temporarily increase the heap to 1GB?  How many
> messages does it take in that scenario?
Same result... and from the Message History I see that the exception occurred
when the component put the message into ActiveMQ:
15:21:25,982 | ERROR | inbound_Worker-1 | DefaultErrorHandler              |
198 - org.apache.camel.camel-core - 2.15.1.redhat-620133 -
IF_INGESTATE-Inbound-Context - IMP-IF-Ingestate-20160326-152100 | Failed
delivery for (MessageId: ID-FGBAL201530-55754-1459002017865-1-275 on
ExchangeId: ID-FGBAL201530-55754-1459002017865-1-276). Exhausted after
delivery attempt: 1 caught:
org.springframework.jms.UncategorizedJmsException: Uncategorized exception
occured during JMS processing; nested exception is javax.jms.JMSException:
Java heap space

Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId              ProcessorId          Processor                                                                       
Elapsed (ms)
[FileRetriever_Rout] [FileRetriever_Rout]
[file://C:/CRT-2.0/IF-Ingestate/inbox?include=%5EEdenred_%5B0-9%5D%7B8%7D.csv&s]
[     25929]
[FileRetriever_Rout] [unmarshal5        ]
[unmarshal[ref:IncomingFileDataFormat]                                        
] [         0]
[FileRetriever_Rout] [setHeader33       ] [setHeader[CamelSplitIndex]                                                   
] [         0]
[FileRetriever_Rout] [setBody2          ] [setBody[simple{${body[0]}}]                                                  
] [        16]
[FileRetriever_Rout] [choice4           ]
[when[simple{${body[INGENICO_OPERATION_ID]} regex '[0-9]+'}]choice[]          
] [      1045]
[FileRetriever_Rout] [log41             ] [log                                                                          
] [         0]
[FileRetriever_Rout] [to14              ]
[activemq:queue:IF_INGESTATE_Inbound                                          
] [      1030]

Exchange
---------------------------------------------------------------------------------------------------------------------------------------
Exchange[
	Id                  ID-FGBAL201530-55754-1459002017865-1-276
	ExchangePattern     InOnly
	Headers             {breadcrumbId=IMP-IF-Ingestate-20160326-152100,
CamelFileAbsolute=true,
CamelFileAbsolutePath=C:\CRT-2.0\IF-Ingestate\inbox\Edenred_15092015.csv,
CamelFileContentType=application/vnd.ms-excel,
CamelFileLastModified=1458774180380, CamelFileLength=6961789,
CamelFileName=Edenred_15092015.csv,
CamelFileNameConsumed=Edenred_15092015.csv,
CamelFileNameOnly=Edenred_15092015.csv,
CamelFileParent=C:\CRT-2.0\IF-Ingestate\inbox,
CamelFilePath=C:\CRT-2.0\IF-Ingestate\inbox\Edenred_15092015.csv,
CamelFileRelativePath=Edenred_15092015.csv, CamelRedelivered=false,
CamelRedeliveryCounter=0, CamelSplitIndex=58,
CrmRSActionPath=/tk_rt_ticket/ingestate/maintenance,
ImportDateTime=20160326-152100,
MsgCorrelationId=Inbound_INGESTATE_20160326-152100}
	BodyType            java.util.HashMap
	Body                {TICKET_ID=00108571, INGENICO_OPERATION_ID=1721709,
TERMINAL_NUMBEROFCONTACTS=1, TICKET_STATUS=1,
TERMINAL_LAST_CONTACT_DATE=2012-11-03 00:51:39.847,
TERMINAL_TECHNOLOGY=TELIUM, TERMINAL_SN=0000107370942739,
TERMINAL_INGESTATE_ID=ICT-TICKET-RESTAURANT:91229702,
TICKET_REGISTRATION_DATE=2012-11-02 10:08:00.000, TERMINAL_PN=M40,
TERMINAL_FIRST_CONTACT_DATE=2012-11-03 00:49:33.780, TERMINAL_ID=91229702,
RECORDER=EDENRED}
]

Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.springframework.jms.UncategorizedJmsException: Uncategorized exception
occured during JMS processing; nested exception is javax.jms.JMSException:
Java heap space
	at
org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:316)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
	at
org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
	at
org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469)[208:org.apache.servicemix.bundles.spring-jms:3.2.12.RELEASE_1]
	at
org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:235)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
	at
org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:413)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
	at
org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:367)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
	at
org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:153)[209:org.apache.camel.camel-jms:2.15.1.redhat-620133]
	at
org.apache.camel.processor.SendProcessor.process(SendProcessor.java:129)[198:org.apache.camel.camel-core:2.15.1.redhat-620133]

However, I'm working on both fronts to optimize producer and consumer.
In this scenario I prefer the producer to be faster than the consumer, so as
not to overload the destination endpoint.
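
One knob I'm looking at for that balance is the consumer concurrency on the
JMS endpoint; a simplified sketch (the queue name is the one from the Message
History above, the numbers and the target URL are only placeholders):

<route id="queueToRest">
    <!-- scale the consumer pool to keep pace with the producer -->
    <from uri="activemq:queue:IF_INGESTATE_Inbound?concurrentConsumers=5&amp;maxConcurrentConsumers=10"/>
    <to uri="http://backend-host/tk_rt_ticket/ingestate/maintenance"/>
</route>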

Thanks in advance.

Best Regards

Michele






Re: Active MQ - OutOfMemory in JbossFuse 6.2 context

Posted by Tim Bain <tb...@alumni.duke.edu>.
Also, are you seeing the first error (involving PooledConnectionFactory) on
the broker or on the client?
On Mar 25, 2016 9:18 PM, "Tim Bain" <tb...@alumni.duke.edu> wrote:

> What are your systemLimits, especially your storeUsage and memoryUsage?
>
> Why is producerFlowControl="false"?  The sole purpose of PFC is to prevent
> the broker from OOMing when given more messages than it can store; you
> should have this on.
>
> What are the units for the number 40000 that you quoted?  Bytes?  Lines?
>
> Is the OOM happening after 500 lines of a single file (or of a few files),
> or after around 500 files (each of which produces many messages)?  I think
> you said the former, but want to be sure.
>
> How large is each line (and therefore how large is each message)?
>
> What happens if you temporarily increase the heap to 1GB?  How many
> messages does it take in that scenario?
>
> Tim
> On Mar 25, 2016 9:00 AM, "Michele" <mi...@finconsgroup.com>
> wrote:
>
>> Hi everyone,
>>
>> I'm new in using ActiveMQ and according to business requirement I have a
>> Camel Route optimized to read large size file (~40000) to split and store
>> each single row in AMQueue and then a pool consumer will process invoking
>> a
>> RS.
>>
>> ActiveMQ after about 500 messages go down in OutOfMemory - Heap Space.
>>
>> I use JBossFuse ESB 6.2 based on ActiveMQ 5.11  and according to
>> http://activemq.apache.org/javalangoutofmemory.html I modified
>> activemq.xml
>> in installationJBFuseDir/etc but the problem continue:
>>
>>  <policyEntry queue=">" producerFlowControl="false"
>> optimizedDispatch="true" queuePrefetch="0">
>>                         <pendingMessageLimitStrategy>
>>                         <constantPendingMessageLimitStrategy
>> limit="1000"/>
>>                         </pendingMessageLimitStrategy>
>>                                         <deadLetterStrategy>
>>
>> <individualDeadLetterStrategy queuePrefix="Test.DLQ."/>
>>                                         </deadLetterStrategy>
>>                                         <pendingQueuePolicy>
>>                                                 <storeCursor />
>>                                         </pendingQueuePolicy>
>>                 </policyEntry>
>>
>> <persistenceAdapter>
>>
>>         <levelDB directory="${data}/leveldb"  />
>> </persistenceAdapter>
>>
>> And I added jvm properties
>> -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=false
>>
>> Errors:
>>
>> PooledConnectionFactory - Expiring connection ActiveMQConnection
>>
>> {id=ID:FGBAL201530-50934-1458820732064-7:3,clientId=ID:FGBAL201530-50934-1458820732064-6:2,started=false}
>> on IOException: Unexpected error occurred: java.lang.OutOfMemoryError:
>> Java
>> heap space
>>
>> or
>>
>> Ignoring no space left exception, java.io.IOException: Java heap space
>> java.io.IOException: Java heap space
>>         at
>>
>> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>>         at
>>
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_79]
>>         at
>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_79]
>>         at java.lang.Thread.run(Thread.java:745)[:1.7.0_79]
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>         at org.fusesource.hawtbuf.Buffer.<init>(Buffer.java:42)
>>         at
>> org.apache.activemq.leveldb.RecordLog$LogReader.read(RecordLog.scala:380)
>>         at
>>
>> org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
>>         at
>>
>> org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
>>         at
>> org.apache.activemq.leveldb.RecordLog.get_reader(RecordLog.scala:644)
>>         at org.apache.activemq.leveldb.RecordLog.read(RecordLog.scala:654)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.getMessage(LevelDBClient.scala:1335)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1274)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1271)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1359)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1358)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$RichDB.check$4(LevelDBClient.scala:323)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$RichDB.cursorRange(LevelDBClient.scala:325)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply$mcV$sp(LevelDBClient.scala:1358)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.usingIndex(LevelDBClient.scala:1038)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient$$anonfun$might_fail_using_index$1.apply(LevelDBClient.scala:1044)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:549)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)
>>         at
>> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)
>>         at
>>
>> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)
>>         at
>>
>> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)
>>         at
>>
>> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)
>>         at
>>
>> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)
>>         at
>>
>> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)
>>         at
>>
>> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)
>>         at
>> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)
>>         at
>> org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)
>>         at
>>
>> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)
>>
>> How to configure Broker to handle the load?
>>
>> Thanks in advance
>>
>> Best Regards
>>
>> Michele
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://activemq.2283324.n4.nabble.com/Active-MQ-OutOfMemory-in-JbossFuse-6-2-context-tp4709960.html
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>
>

Re: Active MQ - OutOfMemory in JbossFuse 6.2 context

Posted by Tim Bain <tb...@alumni.duke.edu>.
What are your systemLimits, especially your storeUsage and memoryUsage?

Why is producerFlowControl="false"?  The sole purpose of PFC is to prevent
the broker from OOMing when given more messages than it can store; you
should have this on.
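
For illustration, a PFC-enabled version of your entry would look something
like this (the memoryLimit value here is arbitrary):

<policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"
             optimizedDispatch="true" queuePrefetch="0">
    <!-- with PFC on, producers block once the queue's memoryLimit is reached,
         instead of exhausting the broker's heap -->
    <pendingQueuePolicy>
        <storeCursor/>
    </pendingQueuePolicy>
</policyEntry>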

What are the units for the number 40000 that you quoted?  Bytes?  Lines?

Is the OOM happening after 500 lines of a single file (or of a few files),
or after around 500 files (each of which produces many messages)?  I think
you said the former, but want to be sure.

How large is each line (and therefore how large is each message)?

What happens if you temporarily increase the heap to 1GB?  How many
messages does it take in that scenario?

Tim
On Mar 25, 2016 9:00 AM, "Michele" <mi...@finconsgroup.com>
wrote:

> Hi everyone,
>
> I'm new in using ActiveMQ and according to business requirement I have a
> Camel Route optimized to read large size file (~40000) to split and store
> each single row in AMQueue and then a pool consumer will process invoking a
> RS.
>
> ActiveMQ after about 500 messages go down in OutOfMemory - Heap Space.
>
> I use JBossFuse ESB 6.2 based on ActiveMQ 5.11  and according to
> http://activemq.apache.org/javalangoutofmemory.html I modified
> activemq.xml
> in installationJBFuseDir/etc but the problem continue:
>
>  <policyEntry queue=">" producerFlowControl="false"
> optimizedDispatch="true" queuePrefetch="0">
>                         <pendingMessageLimitStrategy>
>                         <constantPendingMessageLimitStrategy limit="1000"/>
>                         </pendingMessageLimitStrategy>
>                                         <deadLetterStrategy>
>
> <individualDeadLetterStrategy queuePrefix="Test.DLQ."/>
>                                         </deadLetterStrategy>
>                                         <pendingQueuePolicy>
>                                                 <storeCursor />
>                                         </pendingQueuePolicy>
>                 </policyEntry>
>
> <persistenceAdapter>
>
>         <levelDB directory="${data}/leveldb"  />
> </persistenceAdapter>
>
> And I added jvm properties
> -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=false
>
> Errors:
>
> PooledConnectionFactory - Expiring connection ActiveMQConnection
>
> {id=ID:FGBAL201530-50934-1458820732064-7:3,clientId=ID:FGBAL201530-50934-1458820732064-6:2,started=false}
> on IOException: Unexpected error occurred: java.lang.OutOfMemoryError: Java
> heap space
>
> or
>
> Ignoring no space left exception, java.io.IOException: Java heap space
> java.io.IOException: Java heap space
>         at
>
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48)[184:org.apache.activemq.activemq-osgi:5.11.0.redhat-620133]
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_79]
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_79]
>         at java.lang.Thread.run(Thread.java:745)[:1.7.0_79]
> Caused by: java.lang.OutOfMemoryError: Java heap space
>         at org.fusesource.hawtbuf.Buffer.<init>(Buffer.java:42)
>         at
> org.apache.activemq.leveldb.RecordLog$LogReader.read(RecordLog.scala:380)
>         at
>
> org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
>         at
>
> org.apache.activemq.leveldb.RecordLog$$anonfun$read$2.apply(RecordLog.scala:654)
>         at
> org.apache.activemq.leveldb.RecordLog.get_reader(RecordLog.scala:644)
>         at org.apache.activemq.leveldb.RecordLog.read(RecordLog.scala:654)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.getMessage(LevelDBClient.scala:1335)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1274)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:1271)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1359)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$12.apply(LevelDBClient.scala:1358)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$RichDB.check$4(LevelDBClient.scala:323)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$RichDB.cursorRange(LevelDBClient.scala:325)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply$mcV$sp(LevelDBClient.scala:1358)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1358)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.usingIndex(LevelDBClient.scala:1038)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$might_fail_using_index$1.apply(LevelDBClient.scala:1044)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:549)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.might_fail_using_index(LevelDBClient.scala:1044)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1357)
>         at
>
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:1271)
>         at
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:735)
>         at
>
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:860)
>         at
>
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:109)
>         at
>
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:381)
>         at
>
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:142)
>         at
>
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:159)
>         at
>
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1896)
>         at
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2107)
>         at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1583)
>         at
>
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)
>
> How to configure Broker to handle the load?
>
> Thanks in advance
>
> Best Regards
>
> Michele
>
>
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Active-MQ-OutOfMemory-in-JbossFuse-6-2-context-tp4709960.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>