Posted to users@activemq.apache.org by Ned Wolpert <ne...@imemories.com> on 2013/11/04 18:20:17 UTC

5.3 question and server upgrade question...

Folks-

  I have a 5.3 installation that we're using, and I have 2 questions for it:

1) We have prefetch set to 1 for all of the message consumers on one queue,
where message handling is slow. But it still seems like messages aren't
really round-robined to the next available message consumer. I'll see that
a few consumers are free while messages are waiting around. Is there a
configuration that can help?  (I should note that the server has been
running continuously for 9 months and the problem seems to be getting
worse.... would a restart help?)

2) We are looking to upgrade to 5.9. I haven't started the testing process
yet, but I wanted to know whether this is a case where the 5.3 clients need
to be upgraded at the same time as the server, or whether the clients can
be rolled over to 5.9 in the weeks after the server gets updated.

Thanks!

-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Gary Tully <ga...@gmail.com>.
consumer.receive(poll timeout in millis) - the timeout you pass to
receive() on the client side is effectively your poll interval.
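
To make that concrete, here's a minimal client-side sketch; the broker URL,
queue name, and 5-second timeout are illustrative, not from this thread:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PollingConsumer {
    public static void main(String[] args) throws Exception {
        // prefetch=0: the broker pushes nothing, so each receive() is a poll
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("SLOW.QUEUE"));
        while (true) {
            // the timeout argument is the poll timeout, in milliseconds
            Message message = consumer.receive(5000);
            if (message != null) {
                // ... process the message
            }
        }
    }
}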


-- 
http://redhat.com
http://blog.garytully.com

Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
With prefetch=0, the client polls the server then, right? Is the polling
frequency a settable value?  (Though as I write this, I'm assuming that if
so, it would be set on the client side.)


-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Gary Tully <ga...@gmail.com>.
prefetch=0 will do it, so long as you don't use a MessageListener
directly. Via Spring, the listener does a receive(...) under the hood,
so it will be OK with prefetch=0.
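
A minimal sketch of that Spring arrangement, assuming the grails jms plugin
sits on Spring's DefaultMessageListenerContainer (the URL, queue name, and
class are illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SlowQueueListenerConfig {
    public static DefaultMessageListenerContainer container() {
        // prefetch=0 set connection-wide on the factory URL
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(factory);
        container.setDestinationName("SLOW.QUEUE");
        container.setConcurrentConsumers(4); // 4 listeners per instance, as above
        container.setReceiveTimeout(1000);   // the receive(...) poll timeout, in millis
        // The container loops on consumer.receive(timeout) internally, which
        // is why prefetch=0 is safe here; a MessageListener registered directly
        // on a raw MessageConsumer would not be.
        container.setMessageListener((MessageListener) message -> {
            // ... hours of processing per message, per this thread
        });
        // (in a real app, afterPropertiesSet() and start() are managed by Spring)
        return container;
    }
}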


-- 
http://redhat.com
http://blog.garytully.com

Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
After I saw you wrote 'prefetchExtension=false' I looked it up and found
this bug, which sounds exactly like what I'm hitting:
https://issues.apache.org/jira/browse/AMQ-2651 which led me to you talking
on
http://grokbase.com/t/activemq/users/103bdh5cgx/prefetchextension-off-by-1-for-transacted-consumers-with-prefetchsize-0

So... right now I have prefetch=1.... and I'm using 5.3.0. With the grails
jms (spring) plugin, it's auto-ack for messages, and they are in a
transaction. So it sounds like I'm hitting this. Does
prefetchExtension=false exist in 5.3? (It looks like it was fixed in 5.4.)
Should I really be using prefetch=0?  On this one queue, I have 16
listeners now, and messages usually arrive in groups of < 10 but take a
long time to process (hours).
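
For what it's worth, prefetch can also be scoped to just the one slow queue
with an ActiveMQ destination option, rather than a connection-wide URL
parameter; a sketch, with the queue name illustrative:

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class PerDestinationPrefetch {
    // Scopes prefetch=0 to one queue via an ActiveMQ destination option,
    // instead of the connection-wide jms.prefetchPolicy URL parameter.
    static MessageConsumer slowQueueConsumer(Session session) throws JMSException {
        Queue queue = session.createQueue("SLOW.QUEUE?consumer.prefetchSize=0");
        return session.createConsumer(queue);
    }
}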


-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Gary Tully <ga...@gmail.com>.
Can you try a different ack mode, like client-ack, or use transactions?
The prefetch will be deferred till the ack, which happens later than
in the auto-ack case. Also, in the transacted case, use the
destination policy prefetchExtension=false.
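
A sketch of what that could look like in activemq.xml on a 5.4+ broker,
assuming the attribute is spelled usePrefetchExtension in that release line
(worth verifying against your version); it extends the queue policy entry
already shown further down this thread:

<destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- disable the prefetch extension for transacted consumers -->
        <policyEntry queue=">" usePrefetchExtension="false"
                     producerFlowControl="true" memoryLimit="30mb">
          <pendingQueuePolicy>
            <vmQueueCursor/>
          </pendingQueuePolicy>
        </policyEntry>
      </policyEntries>
    </policyMap>
</destinationPolicy>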


-- 
http://redhat.com
http://blog.garytully.com

Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
Did anyone have an idea about what I could do differently to route messages
to idle consumers?  I just ran into the same situation again this morning,
where a queue has 1 message processing on one consumer, one message waiting,
and 15 idle consumers.  (See notes below for my current configs.)


On Wed, Nov 6, 2013 at 9:40 AM, Ned Wolpert <ne...@imemories.com> wrote:

> Forgot to add, broker url only has one query param....
>
> jms.prefetchPolicy.queuePrefetch=1
>
> which, as I mentioned above, does seem to work.
>
>
> On Tue, Nov 5, 2013 at 10:56 AM, Ned Wolpert <ne...@imemories.com> wrote:
>
>> I can see the preFetch values being set in the console, and they are all
>> one. I've not set priorities.
>>
>> These are 'java' processes, using groovy/grails. The same executable on 4
>> boxes, each executable with 4 listeners, threaded. Using the grails jms
>> plugin, which wraps the Spring jms template configuration.
>> (concurrentConsumers is set to 4 per instance)
>>
>> When I have 1000's of messages pending, all instances are working. This
>> issue is only really visible when there are only ~10 messages in play.
>>
>> The following is the (redacted) activemq.xml.  I'm assuming this config
>> could be better.  I should mention typical usage of our JMS server has a
>> few consumers and tons of producers. Thirty queues. Most queues process
>> quickly and do not fill up. Two queues are for slow producers. The goal is
>> for the producers to send a message and break away, so we don't want slow
>> producers at all. Producer traffic is very spiky.... from ~10 msgs/min to
>> bursts of 100s/min.  We have growth concerns as that number is increasing
>> steadily.
>>
>> <beans
>>   xmlns="http://www.springframework.org/schema/beans"
>>   xmlns:amq="http://activemq.apache.org/schema/core"
>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>   xsi:schemaLocation="http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
>>   http://activemq.apache.org/schema/core
>> http://activemq.apache.org/schema/core/activemq-core.xsd">
>>
>>   <bean
>> class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>     <property name="locations">
>>       <value>file:${activemq.base}/conf/credentials.properties</value>
>>     </property>
>>   </bean>
>>
>>   <broker xmlns="http://activemq.apache.org/schema/core"
>>             brokerName="stagingMQ"
>>             useJmx="true"
>>             enableStatistics="true"
>>             useLocalHostBrokerName="false"
>>             useLoggingForShutdownErrors="true"
>>             dataDirectory="XXXXX">
>>
>>         <managementContext>
>>             <managementContext createConnector="true"
>> connectorPort="XXXXX"/>
>>         </managementContext>
>>
>>         <persistenceAdapter>
>>            <journaledJDBC journalLogFiles="5"
>>                           journalLogFileSize="20 Mb"
>>                           dataDirectory="XXXXXX"
>>                           createTablesOnStartup="false"
>>                           useDatabaseLock="false"
>>                           dataSource="#XXXXX">
>>            </journaledJDBC>
>>         </persistenceAdapter>
>>
>>         <destinationPolicy>
>>             <policyMap>
>>               <policyEntries>
>>                 <policyEntry topic=">" producerFlowControl="true"
>> memoryLimit="1mb">
>>                    <pendingSubscriberPolicy>
>>                     <vmCursor />
>>                   </pendingSubscriberPolicy>
>>                 </policyEntry>
>> <policyEntry queue=">" producerFlowControl="true" memoryLimit="30mb">
>>                   <pendingQueuePolicy>
>>                     <vmQueueCursor/>
>>                   </pendingQueuePolicy>
>>                 </policyEntry>
>>               </policyEntries>
>>             </policyMap>
>>         </destinationPolicy>
>>
>>         <transportConnectors>
>>             <transportConnector name="openwire" uri="XXXX"/>
>>             <transportConnector name="stomp" uri="XXXXX"/>
>>         </transportConnectors>
>>
>>     </broker>
>>
>>     <import resource="jetty.xml"/>
>>     <import resource="databaseconfig.xml"/>
>> </beans>
>>
>>
>>
>>
>> On Tue, Nov 5, 2013 at 9:27 AM, Paul Gale <pa...@gmail.com> wrote:
>>
>>> Have you verified via broker logging that the prefetch values you've
>>> configured are being honored by the broker? Are consumer priorities in
>>> use? Are your consumers instances of the same executable or are they
>>> implemented individually?
>>>
>>> Can you post your broker configuration: activemq.xml?
>>>
>>> How are your clients implemented, e.g., technology: Ruby or Java etc,
>>> choice of client libraries? Just wondering.
>>>
>>>
>>> Thanks,
>>> Paul
>>>
>>> On Tue, Nov 5, 2013 at 10:28 AM, Ned Wolpert <ne...@imemories.com>
>>> wrote:
>>> > Thanks for the response...
>>> >
>>> > Any idea on the round-robin not working? I have a queue with 16
>>> consumers,
>>> > all have pre-fetch set to 1. Five consumers are actively processing
>>> > requests and 3 requests are pending.... the 11 other consumers are
>>> idle.
>>> > History has shown that a new request may go to one of the 11 idle
>>> works,
>>> > but its like those 3 requests are reserved for some of the working
>>> ones. I
>>> > can't figure out what setting would help this, or if this just was a
>>> bug
>>> > with 5.3....
>>> >
>>> >
>>> > On Mon, Nov 4, 2013 at 4:37 PM, Christian Posta
>>> > <ch...@gmail.com>wrote:
>>> >
>>> >> The clients should negotiate the correct open-wire (protocol version)
>>> >> so in theory the broker will be backward compatible with older
>>> >> clients. Just make sure the activemq-openwire-legacy jar is on the
>>> >> classpath (should be by default).
>>> >>
>>> >> Of course I would test this out to make sure :)
>>> >>
>>> >> On Mon, Nov 4, 2013 at 10:20 AM, Ned Wolpert <
>>> ned.wolpert@imemories.com>
>>> >> wrote:
>>> >> > Folks-
>>> >> >
>>> >> >   I have a 5.3 installation that we're using, and I have 2
>>> questions for
>>> >> it:
>>> >> >
>>> >> > 1) We have prefetch set to 1 for all of the message consumers on one
>>> >> queue,
>>> >> > where message handling is slow. But it still seems like messages
>>> aren't
>>> >> > really 'round robin' to the next available message consumer. I'll
>>> see a
>>> >> few
>>> >> > consumers are free but messages are waiting around. Is there a
>>> >> > configuration that can help?  (I should note that the server has
>>> been
>>> >> > running consistently for 9 months and it seems to be getting
>>> worse....
>>> >> > would a restart help?)
>>> >> >
>>> >> > 2) We are looking to upgrade to 5.9. I haven't started the process
>>> of
>>> >> > testing, but I wanted to see if this is a case where the 5.3
>>> clients need
>>> >> > to be upgraded at the same time as the server, or if the clients
>>> can be
>>> >> > rolled over a few weeks to 5.9 after the server gets updated?
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >> > --
>>> >> > Virtually, Ned Wolpert
>>> >> >
>>> >> > "Settle thy studies, Faustus, and begin..."   --Marlowe
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Christian Posta
>>> >> http://www.christianposta.com/blog
>>> >> twitter: @christianposta
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Virtually, Ned Wolpert
>>> >
>>> > "Settle thy studies, Faustus, and begin..."   --Marlowe
>>>
>>
>>
>>
>> --
>> Virtually, Ned Wolpert
>>
>> "Settle thy studies, Faustus, and begin..."   --Marlowe
>>
>
>
>
> --
> Virtually, Ned Wolpert
>
> "Settle thy studies, Faustus, and begin..."   --Marlowe
>



-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
Forgot to add: the broker URL has only one query param....

jms.prefetchPolicy.queuePrefetch=1

which, as I mentioned above, does seem to work.
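
For reference, that param rides on the URL handed to the ActiveMQ
connection factory; a minimal sketch (broker host/port are placeholders,
not our real setup):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FactorySetup {
    public static void main(String[] args) throws JMSException {
        // queuePrefetch=1 applies to every queue consumer created off this factory
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1");
        Connection connection = factory.createConnection();
        connection.start();
        connection.close();
    }
}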


>



-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
I can see the prefetch values being set in the console, and they are all
set to one. I've not set priorities.
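
In case it's useful, the same check can be scripted over JMX; a rough
sketch (the connector URL and the Subscription object-name pattern are
assumptions from memory and may differ by version):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PrefetchCheck {
    public static void main(String[] args) throws Exception {
        // broker-host:1099 stands in for our management connector port
        JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi"));
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
        // one Subscription MBean per consumer; PrefetchSize should read 1
        for (ObjectName sub : mbs.queryNames(new ObjectName(
                "org.apache.activemq:BrokerName=stagingMQ,Type=Subscription,*"), null)) {
            System.out.println(sub + " prefetch=" + mbs.getAttribute(sub, "PrefetchSize"));
        }
        jmxc.close();
    }
}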

These are Java processes, using Groovy/Grails: the same executable on 4
boxes, each with 4 threaded listeners. We use the Grails JMS plugin, which
wraps the Spring JMS listener/template configuration (concurrentConsumers
is set to 4 per instance).
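
For reference, a rough sketch of the plain-Spring equivalent of what the
plugin wires up (queue name, listener body, and broker URL are
placeholders):

import javax.jms.Message;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerSetup {
    public static void main(String[] args) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory(
                "tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1"));
        container.setDestinationName("slow.queue");  // placeholder queue name
        container.setConcurrentConsumers(4);         // 4 listeners per instance
        container.setSessionTransacted(true);        // our sessions are transacted
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // slow message handling happens here
            }
        });
        container.afterPropertiesSet();  // initializes and starts the consumers
    }
}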

When I have 1000's of messages pending, all instances are working. The
issue is only really visible when there are around 10 messages pending.

The following is the (redacted) activemq.xml.  I'm assuming this config
could be better.  I should mention that typical usage of our JMS server is
a few consumers and tons of producers, across thirty queues. Most queues
process quickly and do not fill up; two queues drain slowly because their
message handling is slow. The goal is for producers to send a message and
break away, so we don't want slow producers at all (see the sender sketch
after the config below). Producer traffic is very spiky, from roughly 10
messages/min to bursts of hundreds/min, and we have growth concerns as
that number is increasing steadily.

<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
  http://activemq.apache.org/schema/core
http://activemq.apache.org/schema/core/activemq-core.xsd">

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.base}/conf/credentials.properties</value>
    </property>
  </bean>

  <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="stagingMQ"
            useJmx="true"
            enableStatistics="true"
            useLocalHostBrokerName="false"
            useLoggingForShutdownErrors="true"
            dataDirectory="XXXXX">

        <managementContext>
            <managementContext createConnector="true" connectorPort="XXXXX"/>
        </managementContext>

        <persistenceAdapter>
           <journaledJDBC journalLogFiles="5"
                          journalLogFileSize="20 Mb"
  dataDirectory="XXXXXX"
                          createTablesOnStartup="false"
                          useDatabaseLock="false"
                          dataSource="#XXXXX">
           </journaledJDBC>
        </persistenceAdapter>

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" producerFlowControl="true"
memoryLimit="1mb">
                  <pendingSubscriberPolicy>
                    <vmCursor />
                  </pendingSubscriberPolicy>
                </policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="30mb">
                  <pendingQueuePolicy>
                    <vmQueueCursor/>
                  </pendingQueuePolicy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

        <transportConnectors>
            <transportConnector name="openwire" uri="XXXX"/>
            <transportConnector name="stomp" uri="XXXXX"/>
        </transportConnectors>

    </broker>

    <import resource="jetty.xml"/>
    <import resource="databaseconfig.xml"/>
</beans>
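
As mentioned above, the senders just fire and forget. A minimal sketch of
that sender side (broker host, queue name, and the useAsyncSend option are
illustrative assumptions, not our exact code):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class Sender {
    public static void main(String[] args) throws JMSException {
        // jms.useAsyncSend=true keeps persistent sends from blocking the
        // producer; broker-host and slow.queue are placeholders
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker-host:61616?jms.useAsyncSend=true");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(
                session.createQueue("slow.queue"));
        producer.send(session.createTextMessage("payload"));
        connection.close();
    }
}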







-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Paul Gale <pa...@gmail.com>.
Have you verified via broker logging that the prefetch values you've
configured are being honored by the broker? Are consumer priorities in
use? Are your consumers instances of the same executable or are they
implemented individually?

Can you post your broker configuration: activemq.xml?

How are your clients implemented, e.g., what technology (Ruby, Java, etc.)
and which client libraries? Just wondering.


Thanks,
Paul


Re: 5.3 question and server upgrade question...

Posted by Ned Wolpert <ne...@imemories.com>.
Thanks for the response...

Any idea on the round-robin not working? I have a queue with 16 consumers,
all with prefetch set to 1. Five consumers are actively processing
requests and 3 requests are pending.... the 11 other consumers are idle.
History has shown that a new request may go to one of the 11 idle workers,
but it's as if those 3 requests are reserved for some of the busy ones. I
can't figure out what setting would help this, or whether this was just a
bug in 5.3....
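
(For reference, prefetch is set on the connection URL for us, but it can
equally be pinned per destination; an illustrative sketch with a
placeholder queue name:)

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class SlowConsumer {
    static MessageConsumer create(Session session) throws JMSException {
        // consumer.prefetchSize is an ActiveMQ destination option;
        // slow.queue is a placeholder for the real queue name
        Queue queue = session.createQueue("slow.queue?consumer.prefetchSize=1");
        return session.createConsumer(queue);
    }
}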





-- 
Virtually, Ned Wolpert

"Settle thy studies, Faustus, and begin..."   --Marlowe

Re: 5.3 question and server upgrade question...

Posted by Christian Posta <ch...@gmail.com>.
The clients should negotiate the correct OpenWire protocol version, so in
theory the broker will be backward compatible with older clients. Just
make sure the activemq-openwire-legacy jar is on the classpath (it should
be there by default).

Of course I would test this out to make sure :)
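
A minimal smoke test from an existing 5.3 client against the upgraded
broker might look like this (broker host/port are placeholders):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SmokeTest {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker-host:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // JMS ConnectionMetaData reports the client library's provider version
        System.out.println("provider: "
                + connection.getMetaData().getProviderVersion());
        connection.close();
    }
}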




-- 
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta