Posted to users@qpid.apache.org by Robbie Gemmell <ro...@gmail.com> on 2012/01/04 19:11:12 UTC

Re: DerbyDB vs BerkeleyDB using the Java Broker

Hi Praveen,

I was using the head of trunk at the time of sending the message, and
was testing with your test classes. Persistent messaging performance
is almost entirely dependent on your storage, so short of some fairly
extreme constraint you won't really see any difference from varying
memory or CPU resources.

I ran the tests on a 3.5-year-old Ubuntu virtual machine assigned 2
threads and 1.25GB of RAM, running on an underlying quad-core box with
8GB of RAM running Windows 7. The probable reason it performed faster
is that its storage was held on a (2.5-year-old) SSD.

Rob has now done some work on trunk to improve persistent messaging
performance a bit, so it's probably worth running your tests again with
that. I can't currently run the tests on the machine I used previously,
as recent hurricane-level winds have left me without power or telephone
lines at home for the immediate future :(  There are some other changes
we expect would improve performance further that we are likely to look
at in future, but they will require much more significant changes to be
made.

Robbie

On 19 December 2011 18:57, Praveen M <le...@gmail.com> wrote:
> Hi Robbie,
>
>             I tried grabbing the latest changes and re-running my tests. I
> didn't see the number that you mentioned in your mail. :( It kinda remains
> at what I had mentioned in my earlier email.
>
> Can you please tell me which changelist# you ran against so that I can try
> again?
>
> I'm running with 4GB of memory allocated to the Broker and don't see any
> resource constraints in terms of memory or CPU.
> My test is on a box with 12GB of RAM and 12 CPU cores.
>
> I think I might be missing something. Did you do any specific setting
> changes to your broker config, and were the results that you posted from
> running the tests that I emailed?
>
> Thanks,
> Praveen
>
> On Mon, Dec 19, 2011 at 10:45 AM, Praveen M <le...@gmail.com> wrote:
>
>> Hi Robbie,
>>
>> Thank you for the mail. I will try using the latest changes to grab the
>> recent
>> performance tweaks and run my tests over again.
>>
>> Yep, I made the test enqueue and dequeue at the same time, as I was trying
>> to simulate something close to how it'd work in production. I do know that
>> the dequeue throughput rate is not a very accurate one. :) But yeah, like
>> you said, all I was really trying to check is which one performs better,
>> Berkeley or Derby.
>>
>> Given that Derby outperforms Berkeley for some use cases, what would be
>> your recommendation for which to use as a persistent store? I understand
>> that Berkeley is used more widely than Derby in production by various
>> users of Qpid. Would that mean Berkeley can be expected to be a more
>> robust product, as it might have been tested more thoroughly?
>>
>> Would you have a recommendation for picking one over the other as the
>> MessageStore?
>>
>> Thanks to you and the rest of the team for the work that you guys are
>> putting in towards performance tuning the product.
>> -
>> Praveen
>>
>>
>> On Sun, Dec 18, 2011 at 6:31 PM, Robbie Gemmell <ro...@gmail.com> wrote:
>>
>>> Hi Praveen,
>>>
>>> I notice both your tests actually seem to enqueue and dequeue messages
>>> at the same time (since you commit per publish and the message
>>> listeners will already be receiving a message which then gets committed
>>> by the next publish due to the single session in use, leaving a
>>> message on the queue at the end), so you might not be getting the
>>> precise number you are looking for in the first test, but that doesn't
>>> really change the relative results it gives.
>>>
>>> I didn't see quite the same disparity when I ran the tests on my box,
>>> but the Derby store did still win significantly (giving ~2.3 vs 4.4ms
>>> and 350 vs 600msg/s best cases), though there have been some changes
>>> made on trunk since your runs to massively improve transient messaging
>>> performance of the Java broker which may also have influenced things
>>> here a little. Either way, although it makes the test suite runs take
>>> significantly longer it would seem that in actual use the Derby store
>>> is currently noticeably faster in at least some use cases. As I have
>>> said previously our attention to performance of the Java broker has
>>> been lacking for a while, but we are going to spend some quality time
>>> looking at performance testing very soon now, and given the recent
>>> transient improvements will undoubtedly be looking at persistent
>>> performance going forward as well.
>>>
>>> Robbie
>>>
>>> On 3 December 2011 00:45, Praveen M <le...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> >    I've been trying to benchmark BerkeleyDb against DerbyDb with the
>>> > java broker to find which DB is more performant against the java broker.
>>> >
>>> > I have heard from earlier discussions that berkeleydb runs faster in
>>> > the scalability tests of Qpid. However, some of my tests showed the
>>> > contrary.
>>> >
>>> > I had set up BDB using the "ant build release-bin -Dmodules.opt=bdbstore
>>> > -Ddownload-bdb=true" as directed in Robbie's earlier email in a similar
>>> > topic thread.
>>> >
>>> > I tried running two tests in particular which are of interest to me
>>> >
>>> > Test 1)
>>> > Produce 1000 messages to the broker in transacted mode such that after
>>> every
>>> > enqueue you commit the transaction.
>>> >
>>> > The time taken to enqueue a message in transacted mode from the above
>>> test
>>> > is approx 5-8 ms for derbyDb and about 18-25 ms in the case of
>>> BerkeleyDb.
>>> >
>>> >
>>> > Test 2)
>>> > Produce 1000 messages with auto-ack mode, with a consumer already setup
>>> for
>>> > the queue.
>>> > When the 1000th message is processed, calculate its latency by doing
>>> > Latency = (System.currentTimeMillis() - message.getJMSTimestamp()).
>>> >
>>> > Try to compute an *approximate* dequeue rate by doing
>>> > numberOfMessagesProcessed/Latency.
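
A minimal JMS sketch of the two measurements described above (the queue
name, connection URL and AMQConnectionFactory setup are illustrative
placeholders, not the test code that was attached to the original mail):

    import javax.jms.*;
    import org.apache.qpid.client.AMQConnectionFactory;

    public class StorePerfSketch
    {
        public static void main(String[] args) throws Exception
        {
            // Placeholder connection URL for the 0.x Qpid JMS client.
            ConnectionFactory factory = new AMQConnectionFactory(
                    "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
            Connection connection = factory.createConnection();
            connection.start();

            // Test 1: transacted session, commit after every publish, so each
            // message costs one synchronous write to the store.
            Session txSession = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = txSession.createQueue("perf.queue");
            MessageProducer producer = txSession.createProducer(queue);
            long start = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++)
            {
                producer.send(txSession.createTextMessage("message-" + i));
                txSession.commit();
            }
            double msPerCommit = (System.currentTimeMillis() - start) / 1000.0;

            // Test 2: auto-ack consumer; approximate the dequeue rate from the
            // latency of the last (1000th) message.
            Session ackSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = ackSession.createConsumer(ackSession.createQueue("perf.queue"));
            Message last = null;
            for (int i = 0; i < 1000; i++)
            {
                last = consumer.receive();
            }
            long latencyMillis = System.currentTimeMillis() - last.getJMSTimestamp();
            double approxMsgPerSec = 1000 * 1000.0 / latencyMillis;

            System.out.println(msPerCommit + " ms/commit, ~" + approxMsgPerSec + " msg/s");
            connection.close();
        }
    }
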
>>> >
>>> > In the above test, the results I got were such that,
>>> >
>>> > DerbyDb - 300 - 350 messages/second
>>> > BDB - 40 - 50 messages/second
>>> >
>>> >
>>> > I ran the tests against trunk (12/1).
>>> >
>>> > My Connection to Qpid has a max prefetch of 1 (as my use case requires
>>> this)
>>> > and has tcp_nodelay set to true.
>>> >
>>> > I have attached the tests that I used for reference.
>>> >
>>> > Can someone please tell me if I'm doing something wrong in the above
>>> tests
>>> > or if there is an additional configuration that I'm missing?
>>> >
>>> > Or are these results valid..? If valid, it will be great if the
>>> difference
>>> > could be explained.
>>> >
>>> > Hoping to hear soon.
>>> >
>>> > Thank you,
>>> > --
>>> > -Praveen
>>> >
>>> >
>>>
>>>
>>
>>
>> --
>> -Praveen
>>
>
>
>
> --
> -Praveen

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: DerbyDB vs BerkeleyDB using the Java Broker

Posted by Praveen M <le...@gmail.com>.
Hi Rob,

Thanks for taking a deeper look into this.

Your results are very interesting. I've not tested the case of multiple
consumers/producers and the throughput in that case for BDB. I will try
to simulate a test and see if I can get similar results.

Thank you,
Praveen

On Thu, Jan 5, 2012 at 10:44 AM, Rob Godfrey <ro...@gmail.com> wrote:

> On 4 January 2012 22:56, Rob Godfrey <ro...@gmail.com> wrote:
>
> > In terms of BDB vs. Derby performance, I wouldn't be surprised if for a
> > single producer / single consumer case the performance is very similar.
> > As Robbie highlights, really the performance here is all to do with how
> > often you can synchronously write to disk.  If each store is performing
> > a single write to disk for each transactional commit, then the
> > performance should be very similar.
> >
> >
> So, I actually spent a bit of time today testing this out :-)
>
> The use case that my users most often encounter with persistent messaging
> is where each message sent/received from the broker is sent in its own
> transaction (using JMS), and for the testing I have chosen a 1Kb message
> size.
>
> The Derby store does indeed provide slightly superior performance if you
> have eight or fewer active connections, but the BDB store scales better
> above that number. For completeness I have also tested the C++ broker with
> its async store, and another popular AMQP broker implementation.
>
> You can see the results here:
>
>
> https://docs.google.com/spreadsheet/pub?hl=en_GB&hl=en_GB&key=0AqizD3Y_JixzdFhKZFctbzRWbWtMbE9CcnJzWjZMQVE&output=html#
>
> Note that other test scenarios (in particular not using transactions) would
> likely give wildly different comparative performance, and message sizes may
> also affect the results.  Obviously people should always test on their own
> hardware and with test cases reflecting their actual usage pattern.
>
> Cheers,
> Rob
>



-- 
-Praveen


Re: DerbyDB vs BerkeleyDB using the Java Broker

Posted by Rob Godfrey <ro...@gmail.com>.
On 4 January 2012 22:56, Rob Godfrey <ro...@gmail.com> wrote:

> In terms of BDB vs. Derby performance, I wouldn't be surprised if for a
> single producer / single consumer case the performance is very similar.  As
> Robbie highlights, really the performance here is all to do with how often
> you can synchronously write to disk.  If each store is performing a single
> write to disk for each transactional commit, then the performance should be
> very similar.
>
>
So, I actually spent a bit of time today testing this out :-)

The use case that my users most often encounter with persistent messaging
is where each message sent/received from the broker is sent in its own
transaction (using JMS), and for the testing I have chosen a 1Kb message
size.
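
A minimal sketch of what that per-message-transaction pattern looks like on
the consuming side with the JMS API (the session setup and queue name are
placeholders, not the actual benchmark code):

    // Each received message is acknowledged by committing its own transaction.
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    MessageConsumer consumer = session.createConsumer(session.createQueue("perf.queue"));
    while (true)
    {
        Message message = consumer.receive(1000);   // wait up to 1s for a message
        if (message == null)
        {
            break;                                  // queue drained
        }
        // ... process the ~1KB payload ...
        session.commit();                           // one transaction per message
    }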

The Derby store does indeed provide slightly superior performance if you
have eight or fewer active connections, but the BDB store scales better
above that number. For completeness I have also tested the C++ broker with
its async store, and another popular AMQP broker implementation.

You can see the results here:

https://docs.google.com/spreadsheet/pub?hl=en_GB&hl=en_GB&key=0AqizD3Y_JixzdFhKZFctbzRWbWtMbE9CcnJzWjZMQVE&output=html#

Note that other test scenarios (in particular not using transactions) would
likely give wildly different comparative performance, and message sizes may
also affect the results.  Obviously people should always test on their own
hardware and with test cases reflecting their actual usage pattern.

Cheers,
Rob

Re: DerbyDB vs BerkeleyDB using the Java Broker

Posted by Rob Godfrey <ro...@gmail.com>.
In terms of BDB vs. Derby performance, I wouldn't be surprised if for a
single producer / single consumer case the performance is very similar.  As
Robbie highlights, really the performance here is all to do with how often
you can synchronously write to disk.  If each store is performing a single
write to disk for each transactional commit, then the performance should be
very similar.
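
As a rough illustration of that bound, using the per-commit times from
Praveen's first test: 5-8 ms per transacted enqueue corresponds to roughly
125-200 commits per second from a single producer, and 18-25 ms to roughly
40-55 per second, which is consistent with a single synchronous disk write
per commit dominating the cost.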

Where we have done more work on the BDB side of things is with regard to
scaling with multiple concurrent producer and consumer connections... The
BDB store uses a single thread to coalesce all concurrent work into a
single synchronous write to disk... Derby may do something like this under
the covers, but we don't have such explicit logic to do so.
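
A simplified sketch of that coalescing pattern (illustrative only, not the
actual BDB or Qpid store code; it assumes each session thread has already
appended its log record to the shared journal channel before asking for the
commit to be made durable):

    import java.nio.channels.FileChannel;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.LinkedBlockingQueue;

    class CoalescingCommitter implements Runnable
    {
        private final BlockingQueue<CountDownLatch> _pending =
                new LinkedBlockingQueue<CountDownLatch>();
        private final FileChannel _journal;

        CoalescingCommitter(FileChannel journal)
        {
            _journal = journal;
        }

        // Called from any session thread: queue the commit and wait until the
        // writer thread has forced it to disk.
        public void commit() throws InterruptedException
        {
            CountDownLatch done = new CountDownLatch(1);
            _pending.put(done);
            done.await();
        }

        // The single writer thread: batch whatever commits have accumulated
        // into one synchronous write, then release all of the waiters.
        public void run()
        {
            try
            {
                while (true)
                {
                    List<CountDownLatch> batch = new ArrayList<CountDownLatch>();
                    batch.add(_pending.take());  // block until a commit is waiting
                    _pending.drainTo(batch);     // pick up any others queued meanwhile
                    _journal.force(false);       // one disk sync covers the whole batch
                    for (CountDownLatch latch : batch)
                    {
                        latch.countDown();
                    }
                }
            }
            catch (Exception e)
            {
                Thread.currentThread().interrupt();
            }
        }
    }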

In terms of remaining perf work that still needs to be done - I would like
to apply the same logic described above to allow for better scaling of work
when a client is using many Sessions on the same Connection (note that
currently Connections are treated strictly in order in the Java Broker, and
so a commit on one session cannot be coalesced with a commit on a separate
session on the same Connection).  I would also like to restructure the
design of the database a bit so that querying and inserting is slightly
faster (though my experience is that this will not make a significant
performance improvement).

There are some other pieces of work that could be done to greatly improve
the appearance of performance in non-transactional persistent messaging...
though I am unconvinced by the utility of these use cases (from the JMS API
you would then have no guarantees of the amount of message loss that may
occur on sudden failure).

Cheers,
Rob

On 4 January 2012 21:48, Praveen M <le...@gmail.com> wrote:

> Thanks for writing Robbie. That explains.
>

Re: DerbyDB vs BerkeleyDB using the Java Broker

Posted by Praveen M <le...@gmail.com>.
Thanks for writing Robbie. That explains.



-- 
-Praveen