Posted to users@activemq.apache.org by Josh Carlson <jc...@e-dialog.com> on 2011/04/29 00:41:57 UTC

Scalability problems with queue subscribers

We are using a shared file system Master/Slave setup for the broker, version 5.4.2. Our clients use the STOMP protocol. We use client acknowledgements and communicate synchronously with the broker (using receipts), and we set prefetch to 1 in our subscriptions. Each client iterates over several queues: it subscribes, checks for a message (with a 50ms timeout), and if none is available it un-subscribes and moves on to the next queue. Most of the time the queues we poll are empty. We ran into a problem where our application slowed to a crawl when we deployed additional clients, and I've narrowed it down to the fact that, most of the time, when we subscribed to a queue and then asked whether a message was ready, none was, even though there were messages in the queue. My assumption is that it takes some time for the broker to dispatch a message after the subscription is created.

Are there configuration parameters I could set to help with this problem? Or is this type of use just not going to scale?
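
For illustration, one iteration of our per-queue poll looks roughly like this on the wire (STOMP 1.0 frames; ^@ marks the NUL frame terminator, and the queue name and receipt ids are just placeholders):

    SUBSCRIBE
    destination:/queue/q1
    ack:client
    activemq.prefetchSize:1
    receipt:sub-1

    ^@

    RECEIPT
    receipt-id:sub-1

    ^@

The client then waits up to 50ms for a MESSAGE frame. If none arrives it un-subscribes and moves on:

    UNSUBSCRIBE
    destination:/queue/q1
    receipt:unsub-1

    ^@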

Here is some benchmark data. Each test creates N consumers, but before they are allowed to start it enqueues 50*N messages into *one* queue. The first set of metrics covers the case where the consumers iterate over 6 different queues (even though only one of them contains data). In the second set of metrics there is ONLY 1 queue; in that case each client subscribes and un-subscribes just once, except when a message 'isn't ready' within the 50ms timeout (in which case it re-subscribes to the same queue). The metrics capture the entire getNextMessage call, which iterates over the queues and does the subscribes/un-subscribes, receipts, etc.

Note that in the 6-queue case times degrade once you have 100 consumers. In the single-queue case they also degrade beyond 100 consumers, but we never see a median greater than 206ms.

TEST case: 6 queues, 5 of which are empty (note that since 5 queues are empty, one expects at least 250ms per iteration just to time out on those 5 empty queues at 50ms each). Times are in seconds.

Number of Consumers 1. Multiple Queues
Min: 0.349334999918938
Max: 0.368788999971002
Mean: 0.350222800001502
Median: 0.349644500005525 
Std Dev: 0.00271797410606451
Starting test for consumer count 10

Number of Consumers 10. Multiple Queues
Min: 0.349282000097446
Max: 0.394184999982826
Mean: 0.353602201999165
Median: 0.352992500003892 
Std Dev: 0.00542072612850504
Starting test for consumer count 50

Number of Consumers 50. Multiple Queues
Min: 0.315161000005901
Max: 0.425882000010461
Mean: 0.360078899599938
Median: 0.359610499988775 
Std Dev: 0.00788422976924438
Starting test for consumer count 75

Number of Consumers 75. Multiple Queues
Min: 0.342441000044346
Max: 0.66088400001172
Mean: 0.401721995466513
Median: 0.396242500049994 
Std Dev: 0.0404559664668615
Starting test for consumer count 100

Number of Consumers 100. Multiple Queues
Min: 0.352722999989055
Max: 3.99510599998757
Mean: 0.563622044800525
Median: 0.494796500017401 
Std Dev: 0.413950797976057
Starting test for consumer count 300

Number of Consumers 300. Multiple Queues
Min: 0.361888999934308
Max: 5.53048999991734
Mean: 1.91027370266765
Median: 1.8000390000525 
Std Dev: 0.489824211293863
Starting test for consumer count 600

Number of Consumers 600. Multiple Queues
Min: 0.335149999940768
Max: 10.6164910000516
Mean: 4.52802392866641
Median: 4.35808100004215 
Std Dev: 0.840368954779232
Starting test for consumer count 900

Number of Consumers 900. Multiple Queues
Min: 0.639438000041991
Max: 18.2733670000453
Mean: 8.00563488822206
Median: 7.6759294999647 
Std Dev: 1.38340937172684
Starting test for consumer count 1200

Number of Consumers 1200. Multiple Queues
Min: 0.474138000048697
Max: 31.5018520000158
Mean: 12.8169781057334
Median: 12.2411614999873 
Std Dev: 2.45701978986895
Starting test for consumer count 1500

Number of Consumers 1500. Multiple Queues
Min: 3.1234959999565
Max: 48.7995179999853
Mean: 18.8858608815866
Median: 17.5380175000173 
Std Dev: 4.1516799330252
Starting test for consumer count 1800

Number of Consumers 1800. Multiple Queues
Min: 4.99818900006358
Max: 73.2436839999864
Mean: 27.1358068585671
Median: 25.4123435000074 
Std Dev: 6.30049000845097
Starting test for consumer count 2400

Number of Consumers 2400. Multiple Queues
Min: 0.319424999994226
Max: 114.78910699999
Mean: 46.0846290592237
Median: 44.3440699999919 
Std Dev: 10.2871979782358

TEST case: only 1 queue

Number of Consumers 1. Only One Queue
Min: 0.0413880000123754
Max: 0.0445370000088587
Mean: 0.0416983800008893
Median: 0.041657000023406 
Std Dev: 0.00042437742781418
Starting test for consumer count 10

Number of Consumers 10. Only One Queue
Min: 0.0409169999184087
Max: 0.0494429999962449
Mean: 0.0419903019983321
Median: 0.0417659999802709 
Std Dev: 0.000839524388489985
Starting test for consumer count 50

Number of Consumers 50. Only One Queue
Min: 0.00652100006118417
Max: 0.0843779999995604
Mean: 0.0431237947992515
Median: 0.0423434999538586 
Std Dev: 0.00470470800328101
Starting test for consumer count 75

Number of Consumers 75. Only One Queue
Min: 0.00334199995268136
Max: 0.120109000010416
Mean: 0.0456681223996294
Median: 0.0435704999836161 
Std Dev: 0.00729394094656864
Starting test for consumer count 100

Number of Consumers 100. Only One Queue
Min: 0.00263900007121265
Max: 0.206331999972463
Mean: 0.051723164400761
Median: 0.0513750000391155 
Std Dev: 0.0225837245735077
Starting test for consumer count 300

Number of Consumers 300. Only One Queue
Min: 0.00258900003973395
Max: 1.01170199993066
Mean: 0.138241231733017
Median: 0.136385999969207 
Std Dev: 0.0863229692434055
Starting test for consumer count 600

Number of Consumers 600. Only One Queue
Min: 0.00214999995660037
Max: 3.27785699989181
Mean: 0.274939405133063
Median: 0.256795499997679 
Std Dev: 0.237695097382708
Starting test for consumer count 900

Number of Consumers 900. Only One Queue
Min: 0.00206800003070384
Max: 31.7313950000098
Mean: 0.5553230254
Median: 0.338199999998324 
Std Dev: 1.14882073602057
Starting test for consumer count 1200

Number of Consumers 1200. Only One Queue
Min: 0.00257100001908839
Max: 49.8629720000317
Mean: 0.912980378683317
Median: 0.393762999970932 
Std Dev: 2.87091387484458
Starting test for consumer count 1500

Number of Consumers 1500. Only One Queue
Min: 0.00201100006233901
Max: 74.3607440000633
Mean: 1.19311908142647
Median: 0.205018000095152 
Std Dev: 4.4037236439348
Starting test for consumer count 1800

Number of Consumers 1800. Only One Queue
Min: 0.00196300004608929
Max: 84.4792379999999
Mean: 1.29789674880008
Median: 0.117239500046707 
Std Dev: 5.19232074252423
Starting test for consumer count 2400

Number of Consumers 2400. Only One Queue
Min: 0.00200599990785122
Max: 124.155756999971
Mean: 1.77886690554984
Median: 0.101840000017546 
Std Dev: 8.38169615533614


RE: Scalability problems with queue subscribers

Posted by Josh Carlson <jc...@e-dialog.com>.
Yes. What I meant is that it does not support prefetch=0.

> -----Original Message-----
> From: James Green [mailto:james.mk.green@gmail.com]
> Sent: Friday, May 06, 2011 4:31 AM
> To: users@activemq.apache.org
> Subject: Re: Scalability problems with queue subscribers
> 
> It does support prefetch.
> 
> activemq.prefetchSize = n during the subscription stage.
> 
> From http://activemq.apache.org/stomp.html
> 
> On 5 May 2011 21:20, Josh Carlson <jc...@e-dialog.com> wrote:
> 
> > We are using the STOMP protocol, which doesn't support that. I was
> > curious if there might be any server-side settings that would help
> > with the scalability of concurrent subscribes?
> >
> > > -----Original Message-----
> > > From: Gary Tully [mailto:gary.tully@gmail.com]
> > > Sent: Wednesday, May 04, 2011 5:39 PM
> > > To: users@activemq.apache.org
> > > Subject: Re: Scalability problems with queue subscribers
> > >
> > > Have you tried to use prefetch=0 on the work queue, so the next
> > > message will not be dispatched until you issue another receive call,
> > > rather than when the ack occurs?
> >

Re: Scalability problems with queue subscribers

Posted by James Green <ja...@gmail.com>.
It does support prefetch.

activemq.prefetchSize = n during the subscription stage.

From http://activemq.apache.org/stomp.html
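
For example, to ask the broker to keep at most 10 unacknowledged messages in flight for a subscription (the queue name here is just an example):

    SUBSCRIBE
    destination:/queue/work
    ack:client
    activemq.prefetchSize:10

    ^@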

On 5 May 2011 21:20, Josh Carlson <jc...@e-dialog.com> wrote:

> We are using the STOMP protocol, which doesn't support that. I was curious
> if there might be any server-side settings that would help with the
> scalability of concurrent subscribes?
>
> > -----Original Message-----
> > From: Gary Tully [mailto:gary.tully@gmail.com]
> > Sent: Wednesday, May 04, 2011 5:39 PM
> > To: users@activemq.apache.org
> > Subject: Re: Scalability problems with queue subscribers
> >
> > Have you tried to use prefetch=0 on the work queue, so the next
> > message will not be dispatched until you issue another receive call,
> > rather than when the ack occurs?
>

RE: Scalability problems with queue subscribers

Posted by Josh Carlson <jc...@e-dialog.com>.
We are using the STOMP protocol, which doesn't support that. I was curious whether there are any server-side settings that would help with the scalability of concurrent subscribes?

> -----Original Message-----
> From: Gary Tully [mailto:gary.tully@gmail.com]
> Sent: Wednesday, May 04, 2011 5:39 PM
> To: users@activemq.apache.org
> Subject: Re: Scalability problems with queue subscribers
> 
> Have you tried to use prefetch=0 on the work queue, so the next
> message will not be dispatched until you issue another receive call,
> rather than when the ack occurs?
> 
> [earlier messages in this thread and benchmark data snipped]
> --
> http://blog.garytully.com
> http://fusesource.com

Re: Scalability problems with queue subscribers

Posted by Gary Tully <ga...@gmail.com>.
Have you tried to use prefetch=0 on the work queue, so the next
message will not be dispatched until you issue another receive call,
rather than when the ack occurs?

On 4 May 2011 21:29, Josh Carlson <jc...@e-dialog.com> wrote:
> Hi Gary,
>
> Thanks for the response. We've decided it would be easy for us to partition our consumers such that each consumer operates on only one queue. However, the model we are using retrieves a message from one queue (the job queue), then grabs something to do from another queue (the work queue); once it retrieves the message from the work queue, it acknowledges the job queue and goes off to do its work. However, since another message is dispatched once the ack is done, and the work can take a long time (potentially unbounded), we unsubscribe. Subsequently, once the work is done, the consumer needs to subscribe again and retrieve another message.
>
> Switching to one queue helps when there are no or few messages. However, it does not scale when there are plenty of messages, due to the way we need to subscribe/unsubscribe. Do you have any suggestions on how we might be able to scale this given our current architecture?
>
> -Josh
>
>> [earlier messages in this thread and benchmark data snipped]
>



-- 
http://blog.garytully.com
http://fusesource.com

RE: Scalability problems with queue subscribers

Posted by Josh Carlson <jc...@e-dialog.com>.
Hi Gary,

Thanks for the response. We've decided it would be easy for us to partition our consumers such that each consumer operates on only one queue. However, the model we are using retrieves a message from one queue (the job queue), then grabs something to do from another queue (the work queue); once it retrieves the message from the work queue, it acknowledges the job queue and goes off to do its work. However, since another message is dispatched once the ack is done, and the work can take a long time (potentially unbounded), we unsubscribe. Subsequently, once the work is done, the consumer needs to subscribe again and retrieve another message.
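
Sketched as STOMP frames, the hand-off around the ack looks roughly like this (the message-id, queue name and receipt ids below are placeholders):

    ACK
    message-id:ID:broker-1234-0-1-1
    receipt:ack-1

    ^@

followed, before the long-running work starts, by:

    UNSUBSCRIBE
    destination:/queue/jobs
    receipt:unsub-1

    ^@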

Switching to one queue helps when there are no or few messages. However, it does not scale when there are plenty of messages, due to the way we need to subscribe/unsubscribe. Do you have any suggestions on how we might be able to scale this given our current architecture?

-Josh

> -----Original Message-----
> From: Gary Tully [mailto:gary.tully@gmail.com]
> Sent: Friday, April 29, 2011 6:27 PM
> To: users@activemq.apache.org
> Subject: Re: Scalability problems with queue subscribers
> 
> Setting up a consumer is a little expensive; have a look at using a
> composite destination so that you can subscribe to all destinations at
> once.
> Also, there is a delay between new consumer registration and async
> dispatch, so waiting a few seconds before unsubscribing is necessary.
> 
> http://activemq.apache.org/composite-destinations.html
> 
> On 28 April 2011 23:41, Josh Carlson <jc...@e-dialog.com> wrote:
> > [original message and benchmark data snipped]
> 
> 
> 
> --
> http://blog.garytully.com
> http://fusesource.com

Re: Scalability problems with queue subscribers

Posted by Gary Tully <ga...@gmail.com>.
Setting up a consumer is a little expensive; have a look at using a
composite destination so that you can subscribe to all destinations at
once.
Also, there is a delay between new consumer registration and async
dispatch, so waiting a few seconds before unsubscribing is necessary.

http://activemq.apache.org/composite-destinations.html
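
E.g. something like this, if the comma-separated destination form works for your client (queue names are placeholders):

    SUBSCRIBE
    destination:/queue/q1,/queue/q2,/queue/q3
    ack:client
    activemq.prefetchSize:1

    ^@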

On 28 April 2011 23:41, Josh Carlson <jc...@e-dialog.com> wrote:
> [original message and benchmark data snipped]



-- 
http://blog.garytully.com
http://fusesource.com