Posted to users@activemq.apache.org by bbansal <bh...@groupon.com> on 2011/09/12 02:08:51 UTC

Backlog data causes producers to slow down.

Hello folks, 

I am evaluating ActiveMQ for some simple scenarios. The web server will push
notifications to a queue/topic to be consumed by one or many consumers. The
one requirement is that the web server should not be impacted and should be
able to write at its own speed even if the consumers go down.

ActiveMQ is performing very well at about 1500 QPS (8 producer threads,
persistence enabled, KahaDB). The KahaDB parameters being used are

enableJournalDiskSyncs="false" indexWriteBatchSize="1000"
enableIndexWriteAsync="true"
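
For context, these attributes sit on the kahaDB persistence adapter in
activemq.xml, roughly like this (the directory value is just a placeholder):

    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"
              enableJournalDiskSyncs="false"
              indexWriteBatchSize="1000"
              enableIndexWriteAsync="true"/>
    </persistenceAdapter>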

The system works great if the consumers are all caught up. The issue is when I
test scenarios with backlogged data (keep the producers running for 30 minutes
or so) and then start the consumers. The consumers show a good consumption
rate, but the producers (8 threads, same as before) cannot do more than
120 QPS. That is a drop of more than 90%.

I ran a profiler (JProfiler) on the code, and it looks like the writers are
getting stuck waiting for write locks while competing with removeAsyncMessages()
and the calls that clear messages which have been acknowledged by clients.

I saw similar complaints from some other folks. Are there some settings we can
use to fix the problem? I don't want to degrade any guarantee level (e.g. by
disabling acks).

I would be more than happy to run experiments with different settings if folks
have suggestions.



--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p3806018.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by Gary Tully <ga...@gmail.com>.
For the queue case, with backlogs (when the consumers don't keep up), you may
want to experiment with <kahaDB concurrentStoreAndDispatchQueues="false" />.
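
For example, on the existing persistence adapter element (a sketch only; the
directory value is a placeholder):

    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"
              concurrentStoreAndDispatchQueues="false"/>
    </persistenceAdapter>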





-- 
http://fusesource.com
http://blog.garytully.com

Re: Backlog data causes producers to slow down.

Posted by Jason Whaley <ja...@gmail.com>.
This should be fine. By default this will use a store cursor, which can handle
the overflow up to your storeLimit. As long as you are using either a store
cursor or a file cursor, you can overflow messages on the broker to the message
store or to temp disk storage - just take care not to use vm cursors for this
scenario. The producer flow control page has a small section on using a file
cursor, but more details on how this works can be found at
http://activemq.apache.org/message-cursors.html

If you still want producer flow control on, you may consider upping the
memoryLimit on your destinations. Unless you explicitly specify it, the limit
is 64 mb for both queues and topics, I believe.
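
For illustration, a per-destination entry along these lines (the 128 mb figure
is just an example; storeCursor is the default, and a fileQueueCursor in its
place would spool overflow to temp storage instead):

    <policyEntry queue=">" producerFlowControl="false" memoryLimit="128mb">
      <pendingQueuePolicy>
        <storeCursor/>
      </pendingQueuePolicy>
    </policyEntry>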




Re: Backlog data causes producers to slow down.

Posted by bbansal <bh...@groupon.com>.
Thanks, 

I think I have disabled producer flow control in my config as follows:

     <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" producerFlowControl="false">
                  <pendingSubscriberPolicy>
                    <vmCursor />
                  </pendingSubscriberPolicy>
                </policyEntry>
                <policyEntry queue=">" producerFlowControl="false">
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

Is this sufficient, or do I need to add more configuration to disable
producer flow control for persistent queues?


--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p3806034.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by Johan Edstrom <se...@gmail.com>.
http://activemq.apache.org/producer-flow-control.html



Re: Backlog data causes producers to slow down.

Posted by bbansal <bh...@groupon.com>.
Gary,

We moved on to HornetQ as our underlying transport technology after we were
unable to debug/fix this particular issue, even after looking into the code.

Best
Bhupesh




--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p4222076.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by Gary Tully <ga...@gmail.com>.
@Bhupesh,
the prefetch may be part of the problem, as by default the broker will
try to dispatch 1000 messages to each consumer. If the consumer
(stomp connection) is short lived, this is a waste of resources.

A consumer's acks will contend with message production to some extent; this
is expected, as they share a resource, the consumer dispatch queue.
Batching acks, either by using client ack mode or transactions, helps
reduce the overhead.

If you have not yet tried a 5.6-SNAPSHOT, can you verify whether it behaves the same?
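
For instance, with stomp the prefetch and ack mode can be set on the SUBSCRIBE
frame, something like this (the queue name and prefetch value are only
illustrative):

    SUBSCRIBE
    destination:/queue/TEST.QUEUE
    ack:client
    activemq.prefetchSize:100

    ^@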





-- 
http://fusesource.com
http://blog.garytully.com

Re: Backlog data causes producers to slow down.

Posted by Gary Tully <ga...@gmail.com>.
@harry143, not easily. Do you have a test case you can share? I would like to
get to the bottom of this, but it would be great to have some shared code that
correctly captures the use case. Something in JUnit would be ideal.




-- 
http://fusesource.com
http://blog.garytully.com

Re: Backlog data causes producers to slow down.

Posted by harry143 <ha...@yahoo-inc.com>.
Yes, I tried with optimizedDispatch="false" as well as "true", but it really
did not make much difference.
The problem of degraded producer throughput remained.
@gary: you mentioned in your previous post that the producer and consumer
share a common resource, the "consumer dispatch queue". Is that also the case
when producer flow control is false? If yes, is there any way to separate the
producers and consumers so they use different queues, or something like that?
T&R
Harish Sharma


--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p4222279.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by harry143 <ha...@yahoo-inc.com>.
I am facing a similar problem.
Whenever my consumer goes down and there is a data backlog, the production
rate goes down significantly.
Moreover, when my consumer comes back up, the production rate goes down again,
and this cycle goes on.
Some info:
I am using producer flow control = false, asyncSend = true, persistence =
true and concurrentStoreAndDispatchQueues = true.
As I read in this post about cursors, I am using the default store cursor.

Here is a snippet from broker.xml:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">" optimizedDispatch="true" memoryLimit="1gb"
                       producerFlowControl="false" />
          <policyEntry queue=">" optimizedDispatch="true" memoryLimit="1gb"
                       producerFlowControl="false" />
        </policyEntries>
      </policyMap>
    </destinationPolicy>

<systemUsage>
      <systemUsage sendFailIfNoSpace="true">
        <memoryUsage>
          <memoryUsage limit="20 gb" />
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="500 gb" name="foo" />
        </storeUsage>
        <tempUsage>
          <tempUsage limit="1 gb" />
        </tempUsage>
      </systemUsage>
    </systemUsage>

I am using all of the optimizations I read about at
http://fusesource.com/docs/broker/5.4/tuning/index.html
for the producer, consumer and broker.
These optimizations increase the message production rate a bit, but the main
issue is the stability of the production rate (why does it drop 10x and then
keep dropping every time the consumers go down or come back up?).
I am trying to figure out the cause but am unable to pinpoint the code in
ActiveMQ that is causing this issue.
When I looked at the thread states in JConsole, I found that as soon as a
consumer starts running, the producer threads go into a wait state (State:
WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer). I understand
that the queue these producers write to is a blocking queue, but I don't
understand why the consumers are affecting the producers while flow control is
off, my store size is 500 GB and my memory limit is 20 GB. Why do the producers
not produce at a constant rate until my KahaDB store is full?
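
For reference, spelling the store cursor out explicitly in the policy would
look roughly like this (cursorMemoryHighWaterMark is shown at its default of
70, just for illustration):

    <policyEntry queue=">" optimizedDispatch="true" memoryLimit="1gb"
                 producerFlowControl="false" cursorMemoryHighWaterMark="70">
      <pendingQueuePolicy>
        <storeCursor/>
      </pendingQueuePolicy>
    </policyEntry>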

I really need help in this matter and would deeply appreciate any suggestions
from you guys.




--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p4217057.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by bbansal <bh...@groupon.com>.
Hey Folks, 

I tried concurrentStoreAndDispatchQueues="false" and it didn't help. I still
see around a 10x drop in producer throughput with a backlog.

1 queue, 8 producers, 2 consumers, no backlog: 1200 QPS (producer), 1200 QPS
(consumer)
1 queue, 8 producers, 2 consumers, 4 GB backlog (2M events): 120 QPS
(producer), 1200 QPS (consumer)

I am attaching the scripts I am using; unfortunately I am using stomp and a
perl-based consumer/producer setup.

Best
Bhupesh
http://activemq.2283324.n4.nabble.com/file/n3809392/testcase.tar.gz
testcase.tar.gz 
http://activemq.2283324.n4.nabble.com/file/n3809392/activemq-stomp.xml
activemq-stomp.xml 



--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p3809392.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by kaustubh khasnis <ka...@gmail.com>.
Hi Gary,
We have also observed this problem: when the backlog piles up (e.g. the
consumers are disconnected for some reason, such as a network outage), the
producers slow down as well, even when producer flow control is disabled and
the send is asynchronous.

Thanks and regards
Kaustubh


Re: Backlog data causes producers to slow down.

Posted by bbansal <bh...@groupon.com>.
Hey Gary,

I will try to write a test case, but based on my JProfiler output it looks to
me like the contention is for the write lock, between the removeMessages()
calls issued after acks are received from the clients and the incoming producer
messages.

I am going to play with the producer flow control settings and the other
configurations mentioned in this thread, and will report back if I see a
significant difference.

Best
Bhupesh




--
View this message in context: http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p3808739.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Backlog data causes producers to slow down.

Posted by da...@ontrenet.com.
I have also noticed this problem. When there is high throughput and the
consumers get bogged down working in between messages, they eventually get
dropped and must re-open a connection or they stop receiving messages.

The problem with that is that consumers then have to actively monitor their
connections in the application code; the connection code itself doesn't seem
to handle this on its own.


Re: Backlog data causes producers to slow down.

Posted by Gary Tully <ga...@gmail.com>.
On the results of your JProfiler profiling, it would be good to identify
whether there is a real contention problem there.
If you can generate a simple JUnit test case that demonstrates the behaviour
you are seeing, please open a JIRA issue and we can investigate some more.
A test case will help focus the analysis.
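
Something roughly along these lines would be enough (just a sketch; the
embedded broker settings, queue name and message counts are placeholders):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.broker.BrokerService;
    import org.junit.Test;

    public class BacklogSlowdownTest {

        @Test
        public void producerRateDropsWithBacklog() throws Exception {
            // embedded broker with the default (KahaDB) persistent store
            BrokerService broker = new BrokerService();
            broker.setPersistent(true);
            broker.setDataDirectory("target/backlog-test-data");
            broker.start();

            ConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost?create=false");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.BACKLOG");
            MessageProducer producer = session.createProducer(queue);

            // 1. build a backlog with no consumer attached
            for (int i = 0; i < 200000; i++) {
                producer.send(session.createTextMessage("backlog-" + i));
            }

            // 2. attach a consumer that acks everything it receives
            Session consumerSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = consumerSession.createConsumer(queue);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    try {
                        message.acknowledge();
                    } catch (JMSException ignored) {
                    }
                }
            });

            // 3. measure the producer rate while the consumer drains the backlog
            long start = System.currentTimeMillis();
            int sent = 20000;
            for (int i = 0; i < sent; i++) {
                producer.send(session.createTextMessage("post-backlog-" + i));
            }
            long elapsedMillis = Math.max(1, System.currentTimeMillis() - start);
            System.out.println("producer rate with backlog: " + (sent * 1000L / elapsedMillis) + " msg/s");

            connection.close();
            broker.stop();
        }
    }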




-- 
http://fusesource.com
http://blog.garytully.com