Posted to dev@qpid.apache.org by Arnaud Simon <as...@redhat.com> on 2007/11/07 12:08:20 UTC

Weekly plans.

This week I will be adding dtx and crash recovery tests; I will also be
looking at optimizing the java 0_10 client.


Re: Weekly plans.

Posted by Carl Trieloff <cc...@redhat.com>.
The number I gave was for a single publisher with 1-4 consumers (but the
consume rate is not symmetric, which is a problem).

Thanks for the config info of tests on M2.
Carl.


Rupert Smith wrote:
> What size of messages are you maxing the 1Gig connection at? Obviously 
> its easy to do with big messages. I will attempt to guess, assuming 
> the test is pubsub 1:x, with x large enough that I can assume the 
> broadcast traffic is what is consuming the bandwidth.
>
> 1Gbit/sec / 8 bits/byte / 176k msgs/sec = approx 710 bytes/msg
>
> Are you running p2p or pubsub tests, and if pubsub what is the fanout 
> ratio (1:x)?
>
> The fastest I've seen the Java M2 go on pubsub 2:16 is around 100k 
> msgs/sec w. 256 byte messages. Although, I feel it could go faster 
> because I was testing with just one client machine, and the CPU maxed 
> out on the client and not the broker well before the connection was 
> saturated :(
>
> I have been doing a bit of comparison of M2 with other middleware 
> products. Generally speaking to compare products, I use small messages 
> (settling on 256 bytes as a standard for all tests), because large 
> messages will reach IO bounds and test the hardware not the 
> middleware. So far, we hold up pretty well.
>
> I think one of the best direct comparison between two brokers, is to 
> do a transient 1:1 p2p test, but scale it up 8 or 16 times, so its 8:8 
> or 16:16 across that many separate queues. This gives the broker a 
> good opportunity to scale over many cores, but also tests the full 
> service time to route each message for every message (contrasted with 
> pubsub where each message might be routed once, then pumped out onto 
> the network multiple times). Ultimately it is this service time that 
> matters. Doing p2p with small messages uses more CPU/message on the 
> broker side, therefore gives you the best feel for the efficiency of 
> the software and the best chance of avoiding saturating the hardware. 
> Pubsub produces bigger, and therefore more impressive numbers, but I 
> do think p2p is better for comparison (unless you want to test the 
> efficiency of topics/selectors, which is also worth comparing).
>
> Likewise, in persistent mode, for p2p with small messages, the 
> limiting factor is the disk latency, so the test uncovers how good the 
> disk store/fetch algorithm is wrt the disks max IO operations per 
> second. Again this shows up the differences in algorithms used by 
> different middleware quite nicely. Best I have seen so far was SwiftMQ 
> which managed to write batch 8k msgs/sec in auto ack mode, 16:16 p2p, 
> on a disk setup that can handle maybe 500 IOPS (very rough estimate), 
> which is impressive.
>
> To do a direct compare, suggest you use the same hardware setup for 
> all tests. Build the perftests on M2, under java/perftests:
>
> mvn install uk.co.thebadgerset:junit-toolkit-maven-plugin:tkscriptgen 
> assembly:directory (or you could use assembly:assembly to create a 
> .tar.gz)
>
> cd target/qpid-perftests-1.0-incubating-M2-all-test-deps.dir
>
> run the test cases:
>
> ./TQBT-TX-Qpid-01
> .... through to
> ./PTBT-AA-Qpid-01
>
> detailed in the pom.xml. TQBT-TX stands for Transient Queue Benchmark 
> Throughput w Transactions, PTBT stands for Persistent Topic Benchmark 
> Throughput w AutoAck, etc. An example run might look like:
>
> ./TQBT-TX-Qpid-01 broker=tcp://10.0.0.1:5672 -o resultsdir/ --csv
>
> Also, perftest stuff is most up-to-date on M2.1, both the test code 
> and the numbers in the generated scripts in the pom.xml (which have 
> taken a lot of tweaking to get right). M2.1 perftests has been updated 
> to use pure JMS, like Arnaud did for trunk, but I have also put in a 
> few fixes into it that have not been merged onto trunk. I think I 
> should probably merge all these changes from M2.1 onto M2 and trunk, 
> to make direct comparison easier.
>
> Rupert
>
> On 07/11/2007, *Carl Trieloff* <cctrieloff@redhat.com 
> <ma...@redhat.com>> wrote:
>
>     Robert Greig wrote:
>     > On 07/11/2007, Arnaud Simon <asimon@redhat.com
>     <ma...@redhat.com>> wrote:
>     >
>     >
>     >> This week I will be adding dtx and crash recovery tests, I will
>     also be
>     >> looking at optimizing the java 0_10 client.
>     >>
>     >
>     > Do you have any performance test results for the 0-10 client?
>     >
>     > RG
>     >
>
>     As all the clients are to the C++ broker -there is what is the broker
>     capable of and then how close is the
>     client for each language. I still don't have enough data to quote for
>     each component.
>
>     looks like broker client C++ for publish can max TCP on Gig (176k
>     msg/sec) for the size of message
>     my test is using and it consume 1 core of CPU time to do this. Consume
>     does not show symmetric rate  -- still
>     working out if broker or client lib.
>
>     also don't think this is max - i.e. IB should be much faster - the
>     number above is limited by the specific network
>     I am running on. one of the upcoming tests will most likely be to
>     'cat'
>     the full conversation to the socket / IO
>     buffer on the local machine to determine the top limit if the machine
>     had multiple NICs or on IB. and find out
>     where the consume issue is.... (think Alan is hatching a plan to
>     try that)
>
>     what are the rate / message size / CPU you are seeing on M2? - would
>     like to do a direct compare.
>     Carl.


Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
The Sonic installer script just seems to hang. I'll try it again.

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> > Robert, What other middleware did you do comparative testing against?
>
> I tested against ActiveMQ, OpenAMQ, JBoss Message (new style) and SonicMQ.
>
> > I'd like to test MQ Series, but I cannot get a trial version for Linux
> on
> > x86. There's Linux on zSeries, or Solaris on x86, but not Linux on x86.
> > Maybe there is no version that I can run on the hardware that I have
> access
> > to.
>
> It may be easier to test qpid on solaris 10 SPARC, and use an existing
> MQ installation.
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> Robert, What other middleware did you do comparative testing against?

I tested against ActiveMQ, OpenAMQ, JBoss Messaging (new style) and SonicMQ.

> I'd like to test MQ Series, but I cannot get a trial version for Linux on
> x86. There's Linux on zSeries, or Solaris on x86, but not Linux on x86.
> Maybe there is no version that I can run on the hardware that I have access
> to.

It may be easier to test qpid on solaris 10 SPARC, and use an existing
MQ installation.

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
Robert, What other middleware did you do comparative testing against?

I'd like to test MQ Series, but I cannot get a trial version for Linux on
x86. There's Linux on zSeries, or Solaris on x86, but not Linux on x86.
Maybe there is no version that I can run on the hardware that I have access
to.

Rupert

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> > I thought I saw 'total time: 500ms' on a previous mail you sent about
> this,
> > but I guess I am mistaken.
>
> We did end up running for short periods but we also ran for much
> longer periods. As I said the results per second were consistent.
>
> > Trouble with max throughput tests, is that at saturation who can say
> what
> > the latency will be?
>
> Yes, the steady state point is interesting too.
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> I thought I saw 'total time: 500ms' on a previous mail you sent about this,
> but I guess I am mistaken.

We did end up running for short periods but we also ran for much
longer periods. As I said the results per second were consistent.

> Trouble with max throughput tests, is that at saturation who can say what
> the latency will be?

Yes, the steady state point is interesting too.

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
In that case, the input rate was 20,000 msgs/s, for 10,000 msgs, so:

Latency = 10,000 / 20,000 = 0.5s = 500ms

That would be approximately 500ms to fully deliver each message to all 10 clients.
However, this is a pretty loose usage of Little's Law, because a 10,000 msg
test would not maintain a queue depth of 10,000 msgs on the broker, and the
latency being timed per message would run from the moment the message was
timestamped and sent on its way, rather than right from the beginning of the
test case. So that 500ms is more like an estimate of how long the entire
test would take to run, and the 50ms estimate is probably more in the
ball-park for the per-message latency figure. It was just a guesstimate.
Trying to apply Little's Law, given tests that run in batches, gets pretty
confusing...
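To make the two estimates above concrete, here is a minimal sketch of both Little's Law calculations from this thread (the rates and batch size are the ones quoted in the discussion, not new measurements):

```python
# Little's Law: waiting_items = arrival_rate * latency, so
# latency = waiting_items / rate. Both estimates from the thread,
# treating the 10,000-message batch as the messages "in the system".

def littles_law_latency_ms(waiting_msgs, msgs_per_sec):
    """Implied latency in milliseconds."""
    return 1000.0 * waiting_msgs / msgs_per_sec

# At the 20k msgs/sec input rate: time to fully deliver the batch.
print(littles_law_latency_ms(10_000, 20_000))   # 500.0

# At the 200k msgs/sec aggregate delivery rate: per-message ball-park.
print(littles_law_latency_ms(10_000, 200_000))  # 50.0
```

As the thread notes, neither figure is a true per-message latency, since the queue depth never actually sits at 10,000.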

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
>
> > Waiting events = Throughput * Latency
> >
> > Latency = 10,000 / 200,000 = 1/20 = 50ms.
>
> Is that valid when the 10 clients are handled in parallel? i.e we
> delivered 20k messages to *each client* in a second of wall time.
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:

> Waiting events = Throughput * Latency
>
> Latency = 10,000 / 200,000 = 1/20 = 50ms.

Is that valid when the 10 clients are handled in parallel? i.e we
delivered 20k messages to *each client* in a second of wall time.

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
For 10,000 msgs per test, at 200k msgs per second, by Little's Law, I would
estimate the latency to be:

Waiting events = Throughput * Latency

Latency = 10,000 / 200,000 = 1/20 s = 50ms.

On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
>
> I thought I saw 'total time: 500ms' on a previous mail you sent about
> this, but I guess I am mistaken.
>
> Trouble with max throughput tests, is that at saturation who can say what
> the latency will be?
>
> On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
> >
> > On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> > > I don't think there has been a degradation, its just that the test is
> > > different. As I said, the client machine maxed out before the broker
> > or
> > > network did, so the 100k I observed was not the brokers best effort.
> > In fact
> > > I will try and run the test that gave you 200k+ and see if I can
> > improve my
> > > test case to do this too.
> >
> > That is interesting since previously we were not hitting the CPU limit
> > even with 16 clients running.
> >
> > > One thing I did notice about the 200k test, is that it only ran for
> > > 0.5seconds.
> >
> > ? The test was configurable in terms of the number of messages sent.
> > We ran with various sizes (including very large numbers) but it was
> > pretty consistent so I think we often just ran with 10,000 message
> > batches.
> >
> > > If latency is around 50ms (guessing), then it would be
> > > advisable to
> > > run the test for at least 5 seconds (100 times latency).
> >
> > I hope latency is far lower than 50ms for transient messaging.
> >
> > > The test you are refering to is Publisher/Listener under
> > > org.apache.qpid.topic?
> >
> > Yes.
> >
> > RG
> >
>
>

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
I thought I saw 'total time: 500ms' on a previous mail you sent about this,
but I guess I am mistaken.

The trouble with max throughput tests is that at saturation, who can say
what the latency will be?

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> > I don't think there has been a degradation, its just that the test is
> > different. As I said, the client machine maxed out before the broker or
> > network did, so the 100k I observed was not the brokers best effort. In
> fact
> > I will try and run the test that gave you 200k+ and see if I can improve
> my
> > test case to do this too.
>
> That is interesting since previously we were not hitting the CPU limit
> even with 16 clients running.
>
> > One thing I did notice about the 200k test, is that it only ran for
> > 0.5seconds.
>
> ? The test was configurable in terms of the number of messages sent.
> We ran with various sizes (including very large numbers) but it was
> pretty consistent so I think we often just ran with 10,000 message
> batches.
>
> > If latency is around 50ms (guessing), then it would be
> > advisable to
> > run the test for at least 5 seconds (100 times latency).
>
> I hope latency is far lower than 50ms for transient messaging.
>
> > The test you are refering to is Publisher/Listener under
> > org.apache.qpid.topic?
>
> Yes.
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> I don't think there has been a degradation, its just that the test is
> different. As I said, the client machine maxed out before the broker or
> network did, so the 100k I observed was not the brokers best effort. In fact
> I will try and run the test that gave you 200k+ and see if I can improve my
> test case to do this too.

That is interesting since previously we were not hitting the CPU limit
even with 16 clients running.

> One thing I did notice about the 200k test, is that it only ran for
> 0.5seconds.

? The test was configurable in terms of the number of messages sent.
We ran with various sizes (including very large numbers) but it was
pretty consistent so I think we often just ran with 10,000 message
batches.

> If latency is around 50ms (guessing), then it would be
> advisable to
> run the test for at least 5 seconds (100 times latency).

I hope latency is far lower than 50ms for transient messaging.

> The test you are refering to is Publisher/Listener under
> org.apache.qpid.topic?

Yes.

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
I don't think there has been a degradation, it's just that the test is
different. As I said, the client machine maxed out before the broker or
network did, so the 100k I observed was not the broker's best effort. In fact
I will try and run the test that gave you 200k+ and see if I can improve my
test case to do this too.

One thing I did notice about the 200k test is that it only ran for
0.5 seconds. If latency is around 50ms (guessing), then it would be
advisable to run the test for at least 5 seconds (100 times the latency).
The results I obtained all ran at constant rates for ten minutes, and it has
been a bit tricky to do this. The reason is that max throughput tests have
to run the broker at saturation load for some time, and it becomes all too
easy to end up overflowing the queues; consequently there is logic in the
test code to hold back when too many messages are in flight at once.
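The hold-back logic described above can be sketched as a simple in-flight window; this is a hypothetical illustration, not the actual Qpid perftest code:

```python
import threading

class InFlightThrottle:
    """Block the publisher while too many unacknowledged messages are
    outstanding, so a saturation test cannot overflow the broker's
    queues. Hypothetical sketch, not the real test harness."""

    def __init__(self, max_in_flight):
        self._cond = threading.Condition()
        self._in_flight = 0
        self._max = max_in_flight

    def before_send(self):
        with self._cond:
            # Hold back the publisher until acks catch up.
            while self._in_flight >= self._max:
                self._cond.wait()
            self._in_flight += 1

    def on_ack(self):
        with self._cond:
            self._in_flight -= 1
            self._cond.notify()
```

A publisher thread would call `before_send()` around each send, while the ack handler calls `on_ack()`, keeping the broker near (but not past) saturation.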

The test you are referring to is Publisher/Listener under
org.apache.qpid.topic?

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 07/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
>
> > The fastest I've seen the Java M2 go on pubsub 2:16 is around 100k
> msgs/sec
> > w. 256 byte messages. Although, I feel it could go faster because I was
> > testing with just one client machine, and the CPU maxed out on the
> client
> > and not the broker well before the connection was saturated :(
>
> This seems like there has been a significant degradation over time. On
> the old topic test (which is checked in on the M2 branch now) we used
> to see over 200k messages per second.
>
> Does you test use a single JVM for all the clients? I would be
> interested to know if there is any difference between a single JVM
> with multiple connections and multiple JVMs each with a single
> connection.
>
> Carl, does your test use separate processes or a single process with
> connections?
>
> > Best I have seen so far was SwiftMQ which managed to write
> > batch 8k msgs/sec in auto ack mode, 16:16 p2p, on a disk setup that can
> > handle maybe 500 IOPS (very rough estimate), which is impressive.
>
> What does this test do exactly - what does the "batch 8k msgs/sec"
> mean? How does it compare with the same test on Qpid?
>
> RG
>

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
Yes, it's 343 per second per consumer, which seems reasonable given the
approximate disk IOPS. It just seems like SwiftMQ have managed to get their
write combining to work more effectively than ours.
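For reference, the per-consumer arithmetic works out as follows (using the figures quoted in this exchange):

```python
# Splitting the aggregate transactional rate across the 16 parallel
# 1:1 pairs gives the per-consumer figure discussed above.
aggregate_msgs_per_sec = 5_500   # transactional, one msg per transaction
pairs = 16                       # 16:16 p2p, i.e. 16 independent 1:1 pairs

per_consumer = aggregate_msgs_per_sec // pairs
print(per_consumer)              # 343 (truncated; 343.75 exactly)
```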

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> > Auto ack it did 8k msgs/sec.
> > Using transactions, and one msg/transaction it did 5.5k.
> >
> > This was p2p, 1:1 run 16 times in parallel, to give the best possible
> > opportunity for parallelism and tricks like write combining. I was
> > impressed.
>
> So this is 5.5k msgs/second per consumer or is it 5500/16= 343 per
> second per consumer?
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
> Auto ack it did 8k msgs/sec.
> Using transactions, and one msg/transaction it did 5.5k.
>
> This was p2p, 1:1 run 16 times in parallel, to give the best possible
> opportunity for parallelism and tricks like write combining. I was
> impressed.

So this is 5.5k msgs/second per consumer or is it 5500/16= 343 per
second per consumer?

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
In auto ack mode it did 8k msgs/sec.
Using transactions, with one msg/transaction, it did 5.5k.

This was p2p, 1:1 run 16 times in parallel, to give the best possible
opportunity for parallelism and tricks like write combining. I was
impressed.

On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:
>
> On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:
>
> > By batching, I mean that on a disk that can only do maybe 500 IOPS, by
> > combining (batching) writes, it manages 8k msgs/sec. We can get similar
> > results for Qpid when running under transactions, and sending 10
> > msgs/transaction, but in auto ack mode our write combining strategy does
> not
> > seem to be quite so effective, giving a result closer to the 500 for
> that
> > particular test. I'm not sure exactly why that is. Perhaps its all down
> to
> > our choice of transaction logger. Perhaps our write combining strategy
> is
> > not active in auto ack mode.
>
> So you're saying that Swift can do 8k msgs/sec with a single commit per
> message?
>
> RG
>

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 08/11/2007, Rupert Smith <ru...@googlemail.com> wrote:

> By batching, I mean that on a disk that can only do maybe 500 IOPS, by
> combining (batching) writes, it manages 8k msgs/sec. We can get similar
> results for Qpid when running under transactions, and sending 10
> msgs/transaction, but in auto ack mode our write combining strategy does not
> seem to be quite so effective, giving a result closer to the 500 for that
> particular test. I'm not sure exactly why that is. Perhaps its all down to
> our choice of transaction logger. Perhaps our write combining strategy is
> not active in auto ack mode.

So you're saying that Swift can do 8k msgs/sec with a single commit per message?

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
On 08/11/2007, Robert Greig <ro...@gmail.com> wrote:

> > Best I have seen so far was SwiftMQ which managed to write
> > batch 8k msgs/sec in auto ack mode, 16:16 p2p, on a disk setup that can
> > handle maybe 500 IOPS (very rough estimate), which is impressive.
>
> What does this test do exactly - what does the "batch 8k msgs/sec"
> mean? How does it compare with the same test on Qpid?


By batching, I mean that on a disk that can only do maybe 500 IOPS, by
combining (batching) writes, it manages 8k msgs/sec. We can get similar
results for Qpid when running under transactions and sending 10
msgs/transaction, but in auto ack mode our write combining strategy does not
seem to be quite so effective, giving a result closer to the 500 for that
particular test. I'm not sure exactly why that is. Perhaps it's all down to
our choice of transaction logger. Perhaps our write combining strategy is
not active in auto ack mode.
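The arithmetic behind that write-combining observation, using the rough figures from this thread:

```python
# A disk limited to ~500 synchronous IO operations per second can still
# persist 8k msgs/sec if each physical write carries a batch of messages.
disk_iops = 500                  # very rough estimate, as stated above
target_msgs_per_sec = 8_000

msgs_per_write = target_msgs_per_sec / disk_iops
print(msgs_per_write)            # 16.0 -- i.e. ~16 messages combined per write
```

Without batching, the same disk would cap the auto-ack rate near 500 msgs/sec, which matches the Qpid result mentioned above.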

Rupert

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 07/11/2007, Rupert Smith <ru...@googlemail.com> wrote:

> The fastest I've seen the Java M2 go on pubsub 2:16 is around 100k msgs/sec
> w. 256 byte messages. Although, I feel it could go faster because I was
> testing with just one client machine, and the CPU maxed out on the client
> and not the broker well before the connection was saturated :(

This seems like there has been a significant degradation over time. On
the old topic test (which is checked in on the M2 branch now) we used
to see over 200k messages per second.

Does your test use a single JVM for all the clients? I would be
interested to know if there is any difference between a single JVM
with multiple connections and multiple JVMs each with a single
connection.

Carl, does your test use separate processes or a single process with
connections?

> Best I have seen so far was SwiftMQ which managed to write
> batch 8k msgs/sec in auto ack mode, 16:16 p2p, on a disk setup that can
> handle maybe 500 IOPS (very rough estimate), which is impressive.

What does this test do exactly - what does the "batch 8k msgs/sec"
mean? How does it compare with the same test on Qpid?

RG

Re: Weekly plans.

Posted by Rupert Smith <ru...@googlemail.com>.
What size of messages are you maxing the 1Gig connection at? Obviously it's
easy to do with big messages. I will attempt to guess, assuming the test is
pubsub 1:x, with x large enough that I can assume the broadcast traffic is
what is consuming the bandwidth.

1Gbit/sec / 8 bits/byte / 176k msgs/sec = approx 710 bytes/msg
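That back-of-envelope figure checks out (framing/TCP overhead is ignored here, so the actual payload would be somewhat smaller):

```python
# Message size implied by saturating a 1 Gbit/s link at 176k msgs/sec.
link_bytes_per_sec = 1_000_000_000 / 8      # 125 MB/s
msgs_per_sec = 176_000

print(round(link_bytes_per_sec / msgs_per_sec))  # 710 bytes/msg
```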

Are you running p2p or pubsub tests, and if pubsub what is the fanout ratio
(1:x)?

The fastest I've seen the Java M2 go on pubsub 2:16 is around 100k msgs/sec
w. 256 byte messages. Although, I feel it could go faster because I was
testing with just one client machine, and the CPU maxed out on the client
and not the broker well before the connection was saturated :(

I have been doing a bit of comparison of M2 with other middleware products.
Generally speaking to compare products, I use small messages (settling on
256 bytes as a standard for all tests), because large messages will reach IO
bounds and test the hardware not the middleware. So far, we hold up pretty
well.

I think one of the best direct comparisons between two brokers is to do a
transient 1:1 p2p test, but scale it up 8 or 16 times, so it's 8:8 or 16:16
across that many separate queues. This gives the broker a good opportunity
to scale over many cores, but also tests the full routing service time for
every message (contrasted with pubsub, where each message might be routed
once, then pumped out onto the network multiple times). Ultimately it is
this service time that matters. Doing p2p with small messages uses more CPU
per message on the broker side, and therefore gives you the best feel for
the efficiency of the software and the best chance of avoiding saturating
the hardware. Pubsub produces bigger, and therefore more impressive, numbers,
but I do think p2p is better for comparison (unless you want to test the
efficiency of topics/selectors, which is also worth comparing).

Likewise, in persistent mode, for p2p with small messages, the limiting
factor is the disk latency, so the test uncovers how good the disk
store/fetch algorithm is w.r.t. the disk's max IO operations per second.
Again, this shows up the differences in algorithms used by different
middleware quite nicely. The best I have seen so far was SwiftMQ, which
managed to write-batch 8k msgs/sec in auto ack mode, 16:16 p2p, on a disk
setup that can handle maybe 500 IOPS (very rough estimate), which is
impressive.

To do a direct comparison, I suggest you use the same hardware setup for all
tests. Build the perftests on M2, under java/perftests:

mvn install uk.co.thebadgerset:junit-toolkit-maven-plugin:tkscriptgen assembly:directory
(or you could use assembly:assembly to create a .tar.gz)

cd target/qpid-perftests-1.0-incubating-M2-all-test-deps.dir

run the test cases:

./TQBT-TX-Qpid-01
.... through to
./PTBT-AA-Qpid-01

detailed in the pom.xml. TQBT-TX stands for Transient Queue Benchmark
Throughput w Transactions, PTBT stands for Persistent Topic Benchmark
Throughput w AutoAck, etc. An example run might look like:

./TQBT-TX-Qpid-01 broker=tcp://10.0.0.1:5672 -o resultsdir/ --csv

Also, the perftest stuff is most up-to-date on M2.1, both the test code and
the numbers in the generated scripts in the pom.xml (which have taken a lot
of tweaking to get right). The M2.1 perftests have been updated to use pure
JMS, like Arnaud did for trunk, but I have also put a few fixes into them
that have not been merged onto trunk. I think I should probably merge all
these changes from M2.1 onto M2 and trunk, to make direct comparison easier.

Rupert

On 07/11/2007, Carl Trieloff <cc...@redhat.com> wrote:
>
> Robert Greig wrote:
> > On 07/11/2007, Arnaud Simon <as...@redhat.com> wrote:
> >
> >
> >> This week I will be adding dtx and crash recovery tests, I will also be
> >> looking at optimizing the java 0_10 client.
> >>
> >
> > Do you have any performance test results for the 0-10 client?
> >
> > RG
> >
>
> As all the clients are to the C++ broker -there is what is the broker
> capable of and then how close is the
> client for each language. I still don't have enough data to quote for
> each component.
>
> looks like broker client C++ for publish can max TCP on Gig (176k
> msg/sec) for the size of message
> my test is using and it consume 1 core of CPU time to do this. Consume
> does not show symmetric rate  -- still
> working out if broker or client lib.
>
> also don't think this is max - i.e. IB should be much faster - the
> number above is limited by the specific network
> I am running on. one of the upcoming tests will most likely be to 'cat'
> the full conversation to the socket / IO
> buffer on the local machine to determine the top limit if the machine
> had multiple NICs or on IB. and find out
> where the consume issue is.... (think Alan is hatching a plan to try that)
>
> what are the rate / message size / CPU you are seeing on M2? - would
> like to do a direct compare.
> Carl.

Re: Weekly plans.

Posted by Carl Trieloff <cc...@redhat.com>.
Robert Greig wrote:
> On 07/11/2007, Arnaud Simon <as...@redhat.com> wrote:
>
>   
>> This week I will be adding dtx and crash recovery tests, I will also be
>> looking at optimizing the java 0_10 client.
>>     
>
> Do you have any performance test results for the 0-10 client?
>
> RG
>   

As all the clients are to the C++ broker, there is the question of what the
broker is capable of, and then how close each language client gets to that.
I still don't have enough data to quote for each component.

It looks like the C++ broker/client for publish can max TCP on Gig (176k
msg/sec) for the size of message my test is using, and it consumes 1 core
of CPU time to do this. Consume does not show a symmetric rate -- still
working out whether it is the broker or the client lib.

I also don't think this is the max - i.e. IB should be much faster - the
number above is limited by the specific network I am running on. One of the
upcoming tests will most likely be to 'cat' the full conversation to the
socket / IO buffer on the local machine, to determine the top limit if the
machine had multiple NICs or was on IB, and to find out where the consume
issue is... (I think Alan is hatching a plan to try that)

What rate / message size / CPU are you seeing on M2? I would like to do a
direct comparison.
Carl.

Re: Weekly plans.

Posted by Robert Greig <ro...@gmail.com>.
On 07/11/2007, Arnaud Simon <as...@redhat.com> wrote:

> This week I will be adding dtx and crash recovery tests, I will also be
> looking at optimizing the java 0_10 client.

Do you have any performance test results for the 0-10 client?

RG