Posted to users@qpid.apache.org by Adel Boutros <ad...@live.com> on 2016/07/25 16:10:45 UTC
[Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Hello,
We are currently running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid dispatch router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
We have noticed a significant loss of performance in this architecture compared to one composed of a simple Java Broker. The throughput of the producers drops to half, and there are a lot of oscillations in the presence of the dispatcher.
I have tried to double the number of workers on the dispatcher but it had no impact.
Can you please help us find the cause of this issue?
Dispatch router config
router {
id: router.10454
mode: interior
worker-threads: 4
}
listener {
host: 0.0.0.0
port: 10454
role: normal
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
}
Java Broker config
export QPID_JAVA_MEM="-Xmx16g -Xms2g"
1 Topic + 1 Queue
1 AMQP port without any authentication mechanism (ANONYMOUS)
Qdmanage on Dispatcher
qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
Combined producer throughput
1 Broker: http://hpics.li/a9d6efa
1 Broker + 1 Dispatcher: http://hpics.li/189299b
Regards,
Adel
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Gordon Sim <gs...@redhat.com>.
On 02/08/16 18:29, Adel Boutros wrote:
> Were you able to check the below? Could some other resource be congested in the code, such as the mutex mechanism or the I/O?
When going through the router, all the messages will be transferred to
the broker over a single connection. Are the messages durable/persistent?
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello Ted,
Were you able to check the below? Could some other resource be congested in the code, such as the mutex mechanism or the I/O?
Regards,
Adel
> From: adelboutros@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> Date: Fri, 29 Jul 2016 14:45:48 +0200
>
> Here is an image representation of the badly formatted table: http://imgur.com/a/EuWch
> > From: adelboutros@live.com
> > To: users@qpid.apache.org
> > Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > Date: Fri, 29 Jul 2016 14:40:10 +0200
> >
> > Hello Ted,
> >
> > Increasing the link capacity had no impact, so I have
> > done a series of tests to try and isolate the issue.
> > We tested 3 different architectures without any consumers:
> > Producer --> Broker
> > Producer --> Dispatcher
> > Producer --> Dispatcher --> Broker
> > In every test, we sent 100 000 messages, each containing a byte array of 100 bytes. The producers send in synchronous mode with AUTO_ACKNOWLEDGE.
> >
> > Our benchmark machines have 20 cores and 396 GB of RAM each. We have
> > currently put consumers/producers on one machine and dispatcher/brokers on another machine. They are connected by a 10 Gbps Ethernet link. Nothing else is using the machines.
> >
> > The results are in the table below.
> >
> > What I could observe:
> > The broker alone scales well when I add producers
> > The dispatcher alone scales well when I add producers
> > The dispatcher connected to a broker scales well with 2 producers
> > The dispatcher connected to a broker fails with 3 producers or more
> >
> > I also did some "qdstat -l" runs while the test was running and saw at most 5
> > unsettled deliveries, so I don't think the problem comes from the
> > linkCapacity.
> >
> > What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
> >
> > Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
> >
> >
> > Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
> > 1 | YES | NO  |  3 500 | 370
> > 4 | YES | NO  |  9 200 | 420
> > 1 | NO  | YES |  6 000 | 180
> > 2 | NO  | YES | 12 000 | 192
> > 3 | NO  | YES | 16 000 | 201
> > 1 | YES | YES |  2 500 | 360
> > 2 | YES | YES |  4 800 | 400
> > 3 | YES | YES |  5 200 | 540
> >
> > qdstat -l
> > bash$ qdstat -b dell445srv:10254 -l
> > Router Links
> > type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
> > =======================================================================================================================
> > endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
> > endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
> > endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
> > endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
> > endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
> > endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
> > endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
> >
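For anyone post-processing output like the qdstat listing above, here is a rough parsing sketch. It assumes the 13-column layout shown for mobile endpoint links (with an empty "peer" column); the exact columns can vary between qdstat versions.

```python
def parse_qdstat_links(text):
    """Parse mobile endpoint rows from 'qdstat -l' output.

    Sketch only: assumes the 13-column layout shown above, where the
    'peer' column is empty for these rows. Column order may differ in
    other qdstat versions.
    """
    links = []
    for line in text.splitlines():
        tokens = line.split()
        if len(tokens) == 13 and tokens[0] == "endpoint" and tokens[4] == "mobile":
            links.append({
                "dir": tokens[1],
                "addr": tokens[5],
                "capacity": int(tokens[7]),
                "unsettled": int(tokens[9]),
                "deliveries": int(tokens[10]),
            })
    return links

sample = (
    "endpoint  in   19  46  mobile  perfQueue   1  250  0  0  0        enabled  up\n"
    "endpoint  out  19  54  mobile  perf.topic  0  250  0  2  4994922  enabled  up\n"
)

for link in parse_qdstat_links(sample):
    print(link["addr"], link["dir"], f'{link["unsettled"]}/{link["capacity"]} unsettled')
```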
> > Regards,
> > Adel
> >
> > > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > > To: users@qpid.apache.org
> > > From: tross@redhat.com
> > > Date: Tue, 26 Jul 2016 10:32:29 -0400
> > >
> > > Adel,
> > >
> > > That's a good question. I think it's highly dependent on your
> > > requirements and the environment. Here are some random thoughts:
> > >
> > > - There's a trade-off between memory use (message buffering) and
> > > throughput. If you have many clients sharing the message bus,
> > > smaller values of linkCapacity will protect the router memory. If
> > > you have relatively few clients wanting to go fast, a larger
> > > linkCapacity is appropriate.
> > > - If the underlying network has high latency (satellite links, long
> > > distances, etc.), larger values of linkCapacity will be needed to
> > > protect against stalling caused by delayed settlement.
> > > - The default of 250 is considered a reasonable compromise. I think a
> > > value around 10 is better for a shared bus, but 500-1000 might be
> > > better for throughput with few clients.
> > >
> > > -Ted
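Ted's high-latency point can be put in rough numbers with a bandwidth-delay estimate: to avoid stalling, the unsettled window must cover the deliveries in flight during one settlement round trip, i.e. linkCapacity on the order of rate x RTT. A back-of-the-envelope sketch (illustrative figures only, not an official sizing rule):

```python
import math

def min_link_capacity(target_rate_msg_per_s, settlement_rtt_s):
    """Smallest unsettled window (linkCapacity) that keeps a link busy at
    the target rate. Illustrative only: real settlement timing also
    depends on the peer and the broker."""
    return math.ceil(target_rate_msg_per_s * settlement_rtt_s)

# LAN-ish settlement round trip: a small window already suffices.
print(min_link_capacity(10_000, 0.0005))  # -> 5
# Satellite-like 100 ms round trip: the default of 250 would stall.
print(min_link_capacity(10_000, 0.1))     # -> 1000
```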
> > >
> > >
> > > On 07/26/2016 10:08 AM, Adel Boutros wrote:
> > > > Thanks Ted,
> > > >
> > > > I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
> > > >
> > > > Regards,
> > > > Adel
> > > >
> > > >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > > >> To: users@qpid.apache.org
> > > >> From: tross@redhat.com
> > > >> Date: Tue, 26 Jul 2016 09:44:43 -0400
> > > >>
> > > >> Adel,
> > > >>
> > > >> The number of workers should be related to the number of available
> > > >> processor cores, not the volume of work or number of connections. 4 is
> > > >> probably a good number for testing.
> > > >>
> > > >> I'm not sure what the default link credit is for the Java broker (it's
> > > >> 500 for the C++ broker) or for the clients you're using.
> > > >>
> > > >> The metric you should adjust is the linkCapacity for the listener and
> > > >> route-container connector. LinkCapacity is the number of deliveries
> > > >> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
> > > >> defaults linkCapacity to 250. Depending on the volumes in your test,
> > > >> this might account for the discrepancy. You should try increasing this
> > > >> value.
> > > >>
> > > >> Note that linkCapacity is used to set initial credit for your links.
> > > >>
> > > >> -Ted
> > > >>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Rob Godfrey <ro...@gmail.com>.
On 2 August 2016 at 21:21, Gordon Sim <gs...@redhat.com> wrote:
> On 02/08/16 20:18, Ted Ross wrote:
>
>> Since this is synchronous and durable, I would expect the store to be
>> the bottleneck in these cases and that for rates of ~7.5K, the router
>> shouldn't be a factor.
>>
>
> I don't know anything about the java broker internals, but when going
> through a router the messages will all be sent down one connection. The
> broker will probably then process these serially. That _may_ have a lower
> limit than writing to the store from multiple threads in parallel.
>
>
>
Just to confirm: synchronously sending messages on a single session of a
connection is pretty much the slowest possible way to do things with the
Java Broker. Using multiple connections will speed things up, as
writes/syncs to disk will be coalesced. On earlier protocols, even
splitting across multiple sessions shows an improvement - I can't remember
off the top of my head whether that is implemented in the 1-0 protocol
layer or not. If it isn't, I'll make sure it is fixed for the next version.
-- Rob
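Rob's coalescing argument can be sketched with a toy cost model (all costs are made-up illustrative numbers; real broker behaviour is more involved): each synchronous sender waits a network round trip plus a disk sync per message, but senders on separate connections overlap those waits when the broker coalesces concurrent syncs.

```python
def sync_throughput(connections, rtt_s=0.0002, fsync_s=0.005):
    """Aggregate msg/s for synchronous senders on separate connections.

    Toy model with made-up costs: every message waits one network round
    trip plus one disk sync, but senders on different connections overlap
    those waits because the broker coalesces concurrent syncs.
    """
    per_message_wait_s = rtt_s + fsync_s
    return connections / per_message_wait_s

for n in (1, 2, 4):
    print(f"{n} connection(s): ~{sync_throughput(n):.0f} msg/s")
```

Under this model the aggregate rate grows roughly linearly with the number of connections, which is the qualitative effect Rob describes; the absolute numbers are not meant to match any real broker.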
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
On 08/02/2016 03:25 PM, Adel Boutros wrote:
> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
If you're benchmarking throughput, you really want to avoid synchronous
sending. I think 16K msg/s synchronous with four senders sounds about
right.
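As a quick sanity check of these synchronous rates, a sketch of the arithmetic, using the single-producer dispatcher-only figure from the table earlier in the thread (6 000 msg/s):

```python
def sync_ceiling(single_producer_rate, producers):
    """Best-case aggregate rate for N independent synchronous producers:
    each producer is gated by its own send/acknowledge round trip, so the
    individual rates simply add. (Sketch of the arithmetic only.)"""
    round_trip_s = 1.0 / single_producer_rate
    return producers * single_producer_rate, round_trip_s

ceiling, rtt = sync_ceiling(6_000, 3)
print(f"round trip ~{rtt * 1e6:.0f} us, 3-producer ceiling {ceiling:.0f} msg/s")
```

This reproduces the roundtrip of roughly 167 microseconds and the 18 000 msg/s ceiling that Gordon works out later in the thread.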
>
>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> To: users@qpid.apache.org
>> From: gsim@redhat.com
>> Date: Tue, 2 Aug 2016 20:21:40 +0100
>>
>> On 02/08/16 20:18, Ted Ross wrote:
>>> Since this is synchronous and durable, I would expect the store to be
>>> the bottleneck in these cases and that for rates of ~7.5K, the router
>>> shouldn't be a factor.
>>
>> I don't know anything about the java broker internals, but when going
>> through a router the messages will all be sent down one connection. The
>> broker will probably then process these serially. That _may_ have a
>> lower limit than writing to the store from multiple threads in parallel.
>>
>>
>>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ulf Lilleengen <lu...@redhat.com>.
Hi Adel,
I used a benchmarking tool called ebench
(https://github.com/EnMasseProject/enmasse-bench) that connects a client
(actually 2, a sender and a receiver) to an AMQP endpoint.
Before sending each message, it puts a message tag and timestamp into a
map, and when the message is settled by the receiver, it records the
elapsed time for that message and increments 'sentMessages'. Once the test
has finished, it can produce various statistics, such as throughput
(sentMessages/totalTime), average latencies, and percentiles.
I can't comment on JMS performance, as I have not used it yet. The
benchmark tool uses the qpid-proton reactor Java API. Anyway, I'm not sure
my numbers are comparable to yours. I just wanted to point out that the
dispatch router should be able to process that volume of messages just
fine on standard hardware.
On 08/03/2016 06:39 PM, Adel Boutros wrote:
> And how do you measure your throughput?
>
>> From: adelboutros@live.com
>> To: users@qpid.apache.org
>> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> Date: Wed, 3 Aug 2016 18:38:12 +0200
>>
>> Hello Ulf,
>>
>> I am sending messages with a byte array of 100 bytes, and I am using Berkeley DB as the message store (which should be slower than a memory-only message store, no?)
>>
>> With 1 consumer, 1 producer and no broker, I am at 33k msgs/sec if they are all on the same machine and I have set "jms.forceAsyncSend=true" on the producer and "jms.sendAcksAsync=true" for the consumer.
>>
>> Are you using other options to get 190k? Do you think JMS might be a bottleneck? Or something else in my config/test?
>>
>> JMS client 0.9.0
>> Qpid Java Broker 6.0.1
>> Dispatcher 0.6.0
>>
>> Adel
>>
>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>> To: users@qpid.apache.org
>>> From: lulf@redhat.com
>>> Date: Wed, 3 Aug 2016 16:23:06 +0200
>>>
>>> Hi,
>>>
>>> Excuse me if this was already mentioned somewhere, but what is the size
>>> of the messages you are sending ?
>>>
>>> FWIW, I'm able to get around 30-40k msgs/sec sustained with 1 producer,
>>> 1 consumer, 1 dispatch (4 worker threads) and 1 broker (activemq-5). The
>>> sender sends unsettled messages as fast as it can using qpid-proton
>>> reactor API which is sending async up to the window limit.
>>>
>>> With no broker involved, I'm getting ~190k msgs/sec.
>>>
>>> All of these numbers are from my 8 core laptop. Message size is 128 bytes.
>>>
>>> I don't know the dispatcher that well, but I think it should be able to
>>> handle data from each connector just fine given the numbers I have seen.
>>>
>>> On 08/03/2016 02:41 PM, Adel Boutros wrote:
>>>>
>>>>
>>>>
>>>> Hello again,
>>>>
>>>
>>>
>>>
>>>> As requested, I added a 2nd connector and the appropriate autoLinks on the same host/port but with a different name. It seems to have resolved the issue.
>>>>
>>>> 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connectors --> 5000 msg/s.
>>>> 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors --> 6600 msg/s.
>>>> 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors --> 7700 msg/s.
>>>>
>>>> I think this confirms that the problem is due to a single connection being shared by all clients (consumers/producers), and that a pool of connections, or a connection per worker thread, is a solution to consider.
>>>>
>>>> What do you think?
>>>>
>>>> I added a 3rd connector to see if it changed anything, but it
>>>> didn't. Do you think this is maybe because the dispatcher is not able
>>>> to process fast enough to saturate the 2 connectors?
>>>> 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors --> 7700 msg/s.
>>>>
>>>
>>>
>>>
>>>> Adel
>>>>
>>>>> From: adelboutros@live.com
>>>>> To: users@qpid.apache.org
>>>>> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>> Date: Tue, 2 Aug 2016 22:21:54 +0200
>>>>>
>>>>> Sorry for the typo. Indeed, it was with 3 producers. I used 4 and 8 workerThread but there wasn't a difference.
>>>>> We want to benchmark the worst-case scenarios to see what minimum we can guarantee; this is why we are using synchronous sending. In the future, we will also benchmark with full SSL/SASL to see what impact it has on performance.
>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>> To: users@qpid.apache.org
>>>>>> From: gsim@redhat.com
>>>>>> Date: Tue, 2 Aug 2016 20:41:54 +0100
>>>>>>
>>>>>> On 02/08/16 20:25, Adel Boutros wrote:
>>>>>>> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
>>>>>>
>>>>>> The rate is low because it is synchronous. One message is sent to the
>>>>>> consumer, who acknowledges it; the acknowledgement is then conveyed back
>>>>>> to the sender, who can then send the next message.
>>>>>>
>>>>>> The rate for a single producer through the router was 6,000 per second.
>>>>>> That works out as a roundtrip time of 167 microsecs or so. In your
>>>>>> table, the 16,000 rate was listed as being for 3 producers. Based on the
>>>>>> rate of a single producer, the best you could hope for there is 3 *
>>>>>> 6,000 i.e 18,000. (How many worker threads did you have on the router
>>>>>> for that case?)
>>>>>>
>>> --
>>> Ulf
>>>
>>
>
>
--
Ulf
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by William Davidson <ws...@gmail.com>.
"V"c"'r few w s44d ! d re es2 www www s
On Aug 8, 2016 3:56 AM, "Adel Boutros" <ad...@live.com> wrote:
> Hello guys,
>
> Just to wrap up, the last JMS tests performed with synchronous sending
> were:
> 1 Broker, 1 Dispatcher, 4 producers, 3 consumers, 4 connectors per broker
> --> 6 100 msg/s.
> 2 Broker, 1 Dispatcher, 4 producers, 3 consumers, 4 connectors per broker
> --> 5 800 msg/s.
> 2 Broker, 1 Dispatcher, 8 producers, 3 consumers, 4 connectors per broker
> --> 7 600 msg/s.
> 2 Broker, 1 Dispatcher, 12 producers, 3 consumers, 4 connectors per broker
> --> 8 100 msg/s.
>
> In conclusion:
> * The dispatch router itself is capable of handling a high load of data.
> * The Java Broker is capable of handling a high load of data.
> * Increasing the number of connectors increases performance until other
> components become the bottleneck (doubling the producers increased the
> throughput in the case of 2 brokers).
> * Having a pool of connections as a config parameter, just like
> "workerThreads", might be a neater option than defining multiple
> connectors with their autoLinks.
> * JMS overhead and serialization/de-serialization might also be a
> bottleneck.
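The connection pool proposed in the conclusions could amount to something as simple as round-robin assignment of producers to connectors. A sketch with hypothetical names (not dispatch-router configuration):

```python
from itertools import cycle

def assign_producers(producers, connectors):
    """Round-robin producers over the available broker connectors - the
    manual equivalent of the proposed connection-pool setting. All names
    here are hypothetical, not dispatch-router configuration."""
    robin = cycle(connectors)
    return {producer: next(robin) for producer in producers}

mapping = assign_producers(
    ["producer-1", "producer-2", "producer-3", "producer-4"],
    ["connector-A", "connector-B"],
)
for producer, connector in mapping.items():
    print(producer, "->", connector)
```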
>
> Regards,
> Adel
>
> > From: robbie.gemmell@gmail.com
> > Date: Thu, 4 Aug 2016 10:58:13 +0100
> > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with
> Qpid Java Broker 6.0.0
> > To: users@qpid.apache.org
> >
> > I haven't seen the code Ulf is using, but I would guess... edit: ninja'd
> > by Ulf while I was looking at something else, deleted ;)
> >
> > The reactor Ulf is using is a good bit lower level and has a
> > significantly different threading and application usage model than the
> > JMS client, so they will differ a good amount from that alone, but we
> > can likely improve on the JMS clients performance still.
> >
> > Another big reason they will also typically differ beyond their basic
> > architecture though is that they will often send very different
> > messages on the wire for what may seem on the face of it like similar
> > messages at the application level, as there is a good amount of
> > metadata related to supporting behaviours required of a JMS client.
> > Unless you were to code the reactor based sender to send more similar
> > content (which obviously in some of the cases might not actually make
> > sense), then the messages themselves aren't really equivalent. I'd
> > guess that the messages being used in the reactor sender are body
> > section only (is the body reused?), whereas the ones the JMS client is
> > sending will have properties, header and annotations sections on top
> > with content in each of those. Some of that content is going to be
> > general purpose stuff a reactor based sender might want to send too
> > (e.g message-id) whereas other bits are just JMS-client specific
> > meta-data it likely wouldnt.
> >
> > Robbie
> >
> > On 4 August 2016 at 09:40, Adel Boutros <ad...@live.com> wrote:
> > > Our producers/consumers actually logs the elapsed time. This was
> slowing down the test. I deactivated the logging and with a dispatcher
> only, I am at around 47 000 msg/s with asynchronous sending.
> > >
> > >> From: adelboutros@live.com
> > >> To: users@qpid.apache.org
> > >> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0
> with Qpid Java Broker 6.0.0
> > >> Date: Wed, 3 Aug 2016 18:39:23 +0200
> > >>
> > >> And how do you measure your throughput?
> > >>
> > >> > From: adelboutros@live.com
> > >> > To: users@qpid.apache.org
> > >> > Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0
> with Qpid Java Broker 6.0.0
> > >> > Date: Wed, 3 Aug 2016 18:38:12 +0200
> > >> >
> > >> > Hello Ulf,
> > >> >
> > >> > I am sending messages with a byte array of 100 bytes and I am using
> Berkley DB as a message store (which should be slower than having memory
> only message store, no?)
> > >> >
> > >> > With 1 consumer, 1 producer and no broker, I am at 33k msgs/sec if
> they are all on the same machine and I have set "jms.forceAsyncSend=true"
> on the producer and "jms.sendAcksAsync=true" for the consumer.
> > >> >
> > >> > Are you using other options to get 190k? Do you think JMS might be
> a bottleneck? Or something else in my config/test?
> > >> >
> > >> > JMS client 0.9.0
> > >> > Qpid Java Broker 6.0.1
> > >> > Dispatcher 0.6.0
> > >> >
> > >> > Adel
> > >> >
> > >> > > Subject: Re: [Performance] Benchmarking Qpid dispatch router
> 0.6.0 with Qpid Java Broker 6.0.0
> > >> > > To: users@qpid.apache.org
> > >> > > From: lulf@redhat.com
> > >> > > Date: Wed, 3 Aug 2016 16:23:06 +0200
> > >> > >
> > >> > > Hi,
> > >> > >
> > >> > > Excuse me if this was already mentioned somewhere, but what is
> the size
> > >> > > of the messages you are sending ?
> > >> > >
> > >> > > FWIW, I'm able to get around 30-40k msgs/sec sustained with 1
> producer,
> > >> > > 1 consumer, 1 dispatch (4 worker threads) and 1 broker
> (activemq-5). The
> > >> > > sender sends unsettled messages as fast as it can using
> qpid-proton
> > >> > > reactor API which is sending async up to the window limit.
> > >> > >
> > >> > > With no broker involved, I'm getting ~190k msgs/sec.
> > >> > >
> > >> > > All of these numbers are from my 8 core laptop. Message size is
> 128 bytes.
> > >> > >
> > >> > > I don't know the dispatcher that well, but I think it should be
> able to
> > >> > > handle data from each connector just fine given the numbers I
> have seen.
> > >> > >
> > >> > > On 08/03/2016 02:41 PM, Adel Boutros wrote:
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > Hello again,
> > >> > > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > > As requested, I added a 2nd connector and the appropriate
> autoLinks on the same host/port but with a different name. It seems to have
> resolved the issue.
> > >> > > >
> > >> > > > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connectors
> --> 5000 msg/s.
> > >> > > > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors
> --> 6600 msg/s.
> > >> > > > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors
> --> 7700 msg/s.
> > >> > > >
> > >> > > > I think this confirms the problem is due to the fact a single
> connection is being shared by all clients (consumers/producers) and that
> having a sort of pool of connections or a connection per workerThread is a
> solution to consider.
> > >> > > >
> > >> > > > What do you think?
> > >> > > >
> > >> > > > I added a 3rd connector to see if it changes anything but it
> > >> > > > didn't. Do you think this is maybe because the dispatcher is
> not able
> > >> > > > to process fast enough and saturate the 2 connectors?
> > >> > > > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors
> --> 7700 msg/s.
> > >> > > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > > Adel
> > >> > > >
> > >> > > >> From: adelboutros@live.com
> > >> > > >> To: users@qpid.apache.org
> > >> > > >> Subject: RE: [Performance] Benchmarking Qpid dispatch router
> 0.6.0 with Qpid Java Broker 6.0.0
> > >> > > >> Date: Tue, 2 Aug 2016 22:21:54 +0200
> > >> > > >>
> > >> > > >> Sorry for the typo. Indeed, it was with 3 producers. I used 4
> and 8 workerThread but there wasn't a difference.
> > >> > > >> We want to benchmark in the worst case scenarios actually to
> see what is the minimum we can guarantee. This is why we are using
> synchronous sending. In the future, we will also benchmark with full
> SSL/SASL to see what impact it has on the performance.
> > >> > > >>> Subject: Re: [Performance] Benchmarking Qpid dispatch router
> 0.6.0 with Qpid Java Broker 6.0.0
> > >> > > >>> To: users@qpid.apache.org
> > >> > > >>> From: gsim@redhat.com
> > >> > > >>> Date: Tue, 2 Aug 2016 20:41:54 +0100
> > >> > > >>>
> > >> > > >>> On 02/08/16 20:25, Adel Boutros wrote:
> > >> > > >>>> How about the tests we did with consumer/producers connected
> directly to the dispatcher without any broker where we had 16 000 msg/s
> with 4 producers. Is it also a very low value given that there is no
> persistence or storing here? It was also synchronous sending.
> > >> > > >>>
> > >> > > >>> The rate is low because it is synchronous. One messages is
> sent to the
> > >> > > >>> consumer who acknowledges it, the acknowledgement is then
> conveyed back
> > >> > > >>> to the sender who then can send the next message.
> > >> > > >>>
> > >> > > >>> The rate for a single producer through the router was 6,000
> per second.
> > >> > > >>> That works out as a roundtrip time of 167 microsecs or so. In
> your
> > >> > > >>> table, the 16,000 rate was listed as being for 3 producers.
> Based on the
> > >> > > >>> rate of a single producer, the best you could hope for there
> is 3 *
> > >> > > >>> 6,000 i.e 18,000. (How many worker threads did you have on
> the router
> > >> > > >>> for that case?)
> > >> > > >>>
> > >> > > >>> ------------------------------------------------------------
> ---------
> > >> > > >>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > >> > > >>> For additional commands, e-mail: users-help@qpid.apache.org
> > >> > > >>>
> > >> > > >>
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > >
> > >> > > --
> > >> > > Ulf
> > >> > >
> > >> > > ------------------------------------------------------------
> ---------
> > >> > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > >> > > For additional commands, e-mail: users-help@qpid.apache.org
> > >> > >
> > >> >
> > >>
> > >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > For additional commands, e-mail: users-help@qpid.apache.org
> >
>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello guys,
Just to wrap up, the last JMS tests performed with synchronous sending were:
1 Broker, 1 Dispatcher, 4 producers, 3 consumers, 4 connectors per broker --> 6 100 msg/s.
2 Broker, 1 Dispatcher, 4 producers, 3 consumers, 4 connectors per broker --> 5 800 msg/s.
2 Broker, 1 Dispatcher, 8 producers, 3 consumers, 4 connectors per broker --> 7 600 msg/s.
2 Broker, 1 Dispatcher, 12 producers, 3 consumers, 4 connectors per broker --> 8 100 msg/s.
In conclusion:
* The dispatch router itself can handle a high load of data.
* The Java Broker can likewise handle a high load of data.
* Increasing the number of connectors improves performance until other components become the bottleneck (doubling the producers increased the throughput in the two-broker case).
* Exposing a pool of connections as a configuration parameter, just like "workerThreads", might be a neater option than defining multiple connectors with their autoLinks.
* JMS overhead and serialization/deserialization might also be a bottleneck.
Regards,
Adel
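For reference, the "multiple connectors with their autoLinks" workaround discussed in this thread can be sketched with qdmanage as below, following the command pattern from the original setup. The connector and autoLink names ending in "2" are illustrative placeholders, not taken from the actual test:

```shell
# Sketch only: a second connector to the same broker, plus autoLinks bound to it.
# Names ending in "2" are illustrative placeholders.
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container \
  addr=localhost port=10455 name=localhost.broker.10455.connector2
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in \
  connection=localhost.broker.10455.connector2 name=localhost.broker.10455.perfQueue.in2
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out \
  connection=localhost.broker.10455.connector2 name=localhost.broker.10455.perf.topic.out2
```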
> From: robbie.gemmell@gmail.com
> Date: Thu, 4 Aug 2016 10:58:13 +0100
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
>
> I haven't seen the code Ulf is using, but I would guess... edit: ninja'd
> by Ulf while I was looking at something else, deleted ;)
>
> The reactor Ulf is using is a good bit lower level and has a
> significantly different threading and application usage model than the
> JMS client, so they will differ a good amount from that alone, but we
> can likely improve on the JMS client's performance still.
>
> Another big reason they will also typically differ beyond their basic
> architecture though is that they will often send very different
> messages on the wire for what may seem on the face of it like similar
> messages at the application level, as there is a good amount of
> metadata related to supporting behaviours required of a JMS client.
> Unless you were to code the reactor based sender to send more similar
> content (which obviously in some of the cases might not actually make
> sense), then the messages themselves aren't really equivalent. I'd
> guess that the messages being used in the reactor sender are body
> section only (is the body reused?), whereas the ones the JMS client is
> sending will have properties, header and annotations sections on top
> with content in each of those. Some of that content is going to be
> general purpose stuff a reactor based sender might want to send too
> (e.g. message-id) whereas other bits are just JMS-client specific
> meta-data it likely wouldn't.
>
> Robbie
>
> On 4 August 2016 at 09:40, Adel Boutros <ad...@live.com> wrote:
> > Our producers/consumers actually log the elapsed time. This was slowing down the test. I deactivated the logging, and with a dispatcher only I am at around 47 000 msg/s with asynchronous sending.
> >
> >> From: adelboutros@live.com
> >> To: users@qpid.apache.org
> >> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> Date: Wed, 3 Aug 2016 18:39:23 +0200
> >>
> >> And how do you measure your throughput?
> >>
> >> > From: adelboutros@live.com
> >> > To: users@qpid.apache.org
> >> > Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> > Date: Wed, 3 Aug 2016 18:38:12 +0200
> >> >
> >> > Hello Ulf,
> >> >
> >> > I am sending messages with a byte array of 100 bytes and I am using Berkeley DB as a message store (which should be slower than a memory-only message store, no?)
> >> >
> >> > With 1 consumer, 1 producer and no broker, I am at 33k msgs/sec if they are all on the same machine and I have set "jms.forceAsyncSend=true" on the producer and "jms.sendAcksAsync=true" for the consumer.
> >> >
> >> > Are you using other options to get 190k? Do you think JMS might be a bottleneck? Or something else in my config/test?
> >> >
> >> > JMS client 0.9.0
> >> > Qpid Java Broker 6.0.1
> >> > Dispatcher 0.6.0
> >> >
> >> > Adel
> >> >
> >> > > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> > > To: users@qpid.apache.org
> >> > > From: lulf@redhat.com
> >> > > Date: Wed, 3 Aug 2016 16:23:06 +0200
> >> > >
> >> > > Hi,
> >> > >
> >> > > Excuse me if this was already mentioned somewhere, but what is the size
> >> > > of the messages you are sending ?
> >> > >
> >> > > FWIW, I'm able to get around 30-40k msgs/sec sustained with 1 producer,
> >> > > 1 consumer, 1 dispatch (4 worker threads) and 1 broker (activemq-5). The
> >> > > sender sends unsettled messages as fast as it can using qpid-proton
> >> > > reactor API which is sending async up to the window limit.
> >> > >
> >> > > With no broker involved, I'm getting ~190k msgs/sec.
> >> > >
> >> > > All of these numbers are from my 8 core laptop. Message size is 128 bytes.
> >> > >
> >> > > I don't know the dispatcher that well, but I think it should be able to
> >> > > handle data from each connector just fine given the numbers I have seen.
> >> > >
> >> > > On 08/03/2016 02:41 PM, Adel Boutros wrote:
> >> > > >
> >> > > >
> >> > > >
> >> > > > Hello again,
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > > As requested, I added a 2nd connector and the appropriate autoLinks on the same host/port but with a different name. It seems to have resolved the issue.
> >> > > >
> >> > > > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connectors --> 5000 msg/s.
> >> > > > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors --> 6600 msg/s.
> >> > > > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors --> 7700 msg/s.
> >> > > >
> >> > > > I think this confirms the problem is due to the fact a single connection is being shared by all clients (consumers/producers) and that having a sort of pool of connections or a connection per workerThread is a solution to consider.
> >> > > >
> >> > > > What do you think?
> >> > > >
> >> > > > I added a 3rd connector to see if it changes anything but it
> >> > > > didn't. Do you think this is maybe because the dispatcher is not able
> >> > > > to process fast enough and saturate the 2 connectors?
> >> > > > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors --> 7700 msg/s.
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > > Adel
> >> > > >
> >> > > >> From: adelboutros@live.com
> >> > > >> To: users@qpid.apache.org
> >> > > >> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> > > >> Date: Tue, 2 Aug 2016 22:21:54 +0200
> >> > > >>
> >> > > >> Sorry for the typo. Indeed, it was with 3 producers. I used 4 and 8 workerThreads but there wasn't a difference.
> >> > > >> We want to benchmark in the worst case scenarios actually to see what is the minimum we can guarantee. This is why we are using synchronous sending. In the future, we will also benchmark with full SSL/SASL to see what impact it has on the performance.
> >> > > >>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> > > >>> To: users@qpid.apache.org
> >> > > >>> From: gsim@redhat.com
> >> > > >>> Date: Tue, 2 Aug 2016 20:41:54 +0100
> >> > > >>>
> >> > > >>> On 02/08/16 20:25, Adel Boutros wrote:
> >> > > >>>> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
> >> > > >>>
> >> > > >>> The rate is low because it is synchronous. One message is sent to the
> >> > > >>> consumer, who acknowledges it; the acknowledgement is then conveyed back
> >> > > >>> to the sender, who can then send the next message.
> >> > > >>>
> >> > > >>> The rate for a single producer through the router was 6,000 per second.
> >> > > >>> That works out as a roundtrip time of 167 microsecs or so. In your
> >> > > >>> table, the 16,000 rate was listed as being for 3 producers. Based on the
> >> > > >>> rate of a single producer, the best you could hope for there is 3 *
> >> > > >>> 6,000 i.e 18,000. (How many worker threads did you have on the router
> >> > > >>> for that case?)
> >> > > >>>
> >> > > >>> ---------------------------------------------------------------------
> >> > > >>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> >> > > >>> For additional commands, e-mail: users-help@qpid.apache.org
> >> > > >>>
> >> > > >>
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> > > --
> >> > > Ulf
> >> > >
> >> > >
> >> >
> >>
> >
>
>
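Gordon's synchronous-send arithmetic quoted above can be sanity-checked directly: a producer completing 6,000 send+acknowledge roundtrips per second spends roughly 167 microseconds per roundtrip, so three independent producers top out near 18,000 msg/s.

```shell
# Check the roundtrip arithmetic from the quoted discussion:
# 6,000 synchronous roundtrips/sec -> microseconds per roundtrip,
# and the best-case aggregate for 3 independent producers.
awk 'BEGIN {
  rate = 6000
  printf "%.0f microseconds per roundtrip\n", 1e6 / rate
  printf "%d msg/s best case for 3 producers\n", 3 * rate
}'
```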
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Robbie Gemmell <ro...@gmail.com>.
I haven't seen the code Ulf is using, but I would guess... edit: ninja'd
by Ulf while I was looking at something else, deleted ;)
The reactor Ulf is using is a good bit lower level and has a
significantly different threading and application usage model than the
JMS client, so they will differ a good amount from that alone, but we
can likely improve on the JMS client's performance still.
Another big reason they will also typically differ beyond their basic
architecture though is that they will often send very different
messages on the wire for what may seem on the face of it like similar
messages at the application level, as there is a good amount of
metadata related to supporting behaviours required of a JMS client.
Unless you were to code the reactor based sender to send more similar
content (which obviously in some of the cases might not actually make
sense), then the messages themselves aren't really equivalent. I'd
guess that the messages being used in the reactor sender are body
section only (is the body reused?), whereas the ones the JMS client is
sending will have properties, header and annotations sections on top
with content in each of those. Some of that content is going to be
general purpose stuff a reactor based sender might want to send too
(e.g. message-id) whereas other bits are just JMS-client specific
meta-data it likely wouldn't.
Robbie
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Our producers/consumers actually log the elapsed time. This was slowing down the test. I deactivated the logging, and with a dispatcher only I am at around 47 000 msg/s with asynchronous sending.
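For readers trying to reproduce this, the two asynchronous options mentioned in this thread are Qpid JMS connection URI options; a hedged example of combining them on one URI (host and port are placeholders, not the actual test machines):

```shell
# Illustrative only: a Qpid JMS connection URI enabling the two async options
# named in this thread. Host and port are placeholders.
echo 'amqp://localhost:10454?jms.forceAsyncSend=true&jms.sendAcksAsync=true'
```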
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
And how do you measure your throughput?
> > >>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > >>> To: users@qpid.apache.org
> > >>> From: gsim@redhat.com
> > >>> Date: Tue, 2 Aug 2016 20:41:54 +0100
> > >>>
> > >>> On 02/08/16 20:25, Adel Boutros wrote:
> > >>>> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
> > >>>
> > >>> The rate is low because it is synchronous. One messages is sent to the
> > >>> consumer who acknowledges it, the acknowledgement is then conveyed back
> > >>> to the sender who then can send the next message.
> > >>>
> > >>> The rate for a single producer through the router was 6,000 per second.
> > >>> That works out as a roundtrip time of 167 microsecs or so. In your
> > >>> table, the 16,000 rate was listed as being for 3 producers. Based on the
> > >>> rate of a single producer, the best you could hope for there is 3 *
> > >>> 6,000 i.e 18,000. (How many worker threads did you have on the router
> > >>> for that case?)
> > >>>
> > >>> ---------------------------------------------------------------------
> > >>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > >>> For additional commands, e-mail: users-help@qpid.apache.org
> > >>>
> > >>
> > >
> > >
> > >
> >
> > --
> > Ulf
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > For additional commands, e-mail: users-help@qpid.apache.org
> >
>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello Ulf,
I am sending messages with a byte array of 100 bytes and I am using Berkeley DB as the message store (which should be slower than a memory-only message store, no?)
With 1 consumer, 1 producer and no broker, I am at 33k msgs/sec if they are all on the same machine and I have set "jms.forceAsyncSend=true" on the producer and "jms.sendAcksAsync=true" for the consumer.
Are you using other options to get 190k? Do you think JMS might be a bottleneck? Or something else in my config/test?
JMS client 0.9.0
Qpid Java Broker 6.0.1
Dispatcher 0.6.0
Adel
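
For reference, the two async options mentioned above are set as query parameters on the Qpid JMS connection URI. A sketch (host and port are placeholders):

```
amqp://machine:port?jms.forceAsyncSend=true&jms.sendAcksAsync=true
```

Note that with forceAsyncSend the producer no longer waits for settlement of each message, so throughput measured around send() then reflects the enqueue rate rather than the settled delivery rate.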
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: lulf@redhat.com
> Date: Wed, 3 Aug 2016 16:23:06 +0200
>
> Hi,
>
> Excuse me if this was already mentioned somewhere, but what is the size
> of the messages you are sending ?
>
> FWIW, I'm able to get around 30-40k msgs/sec sustained with 1 producer,
> 1 consumer, 1 dispatch (4 worker threads) and 1 broker (activemq-5). The
> sender sends unsettled messages as fast as it can using qpid-proton
> reactor API which is sending async up to the window limit.
>
> With no broker involved, I'm getting ~190k msgs/sec.
>
> All of these numbers are from my 8 core laptop. Message size is 128 bytes.
>
> I don't know the dispatcher that well, but I think it should be able to
> handle data from each connector just fine given the numbers I have seen.
>
> On 08/03/2016 02:41 PM, Adel Boutros wrote:
> >
> >
> >
> > Hello again,
> >
>
>
>
> > As requested, I added a 2nd connector and the appropriate autoLinks on the same host/port but with a different name. It seems to have resolved the issue.
> >
> > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connectors --> 5000 msg/s.
> > 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors --> 6600 msg/s.
> > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors --> 7700 msg/s.
> >
> > I think this confirms the problem is due to the fact a single connection is being shared by all clients (consumers/producers) and that having a sort of pool of connections or a connection per workerThread is a solution to consider.
> >
> > What do you think?
> >
> > I added a 3rd connector to see if it changes anything but it
> > didn't. Do you think this is maybe because the dispatcher is not able
> > to process fast enough and saturate the 2 connectors?
> > 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors --> 7700 msg/s.
> >
>
>
>
> > Adel
> >
> >> From: adelboutros@live.com
> >> To: users@qpid.apache.org
> >> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> Date: Tue, 2 Aug 2016 22:21:54 +0200
> >>
> >> Sorry for the typo. Indeed, it was with 3 producers. I used 4 and 8 workerThread but there wasn't a difference.
> >> We want to benchmark in the worst case scenarios actually to see what is the minimum we can guarantee. This is why we are using synchronous sending. In the future, we will also benchmark with full SSL/SASL to see what impact it has on the performance.
> >>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >>> To: users@qpid.apache.org
> >>> From: gsim@redhat.com
> >>> Date: Tue, 2 Aug 2016 20:41:54 +0100
> >>>
> >>> On 02/08/16 20:25, Adel Boutros wrote:
> >>>> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
> >>>
> >>> The rate is low because it is synchronous. One messages is sent to the
> >>> consumer who acknowledges it, the acknowledgement is then conveyed back
> >>> to the sender who then can send the next message.
> >>>
> >>> The rate for a single producer through the router was 6,000 per second.
> >>> That works out as a roundtrip time of 167 microsecs or so. In your
> >>> table, the 16,000 rate was listed as being for 3 producers. Based on the
> >>> rate of a single producer, the best you could hope for there is 3 *
> >>> 6,000 i.e 18,000. (How many worker threads did you have on the router
> >>> for that case?)
> >>>
> >>> ---------------------------------------------------------------------
> >>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> >>> For additional commands, e-mail: users-help@qpid.apache.org
> >>>
> >>
> >
> >
> >
>
> --
> Ulf
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org
>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ulf Lilleengen <lu...@redhat.com>.
Hi,
Excuse me if this was already mentioned somewhere, but what is the size
of the messages you are sending?
FWIW, I'm able to get around 30-40k msgs/sec sustained with 1 producer,
1 consumer, 1 dispatch (4 worker threads) and 1 broker (activemq-5). The
sender sends unsettled messages as fast as it can using qpid-proton
reactor API which is sending async up to the window limit.
With no broker involved, I'm getting ~190k msgs/sec.
All of these numbers are from my 8 core laptop. Message size is 128 bytes.
I don't know the dispatcher that well, but I think it should be able to
handle data from each connector just fine given the numbers I have seen.
On 08/03/2016 02:41 PM, Adel Boutros wrote:
>
>
>
> Hello again,
>
> As requested, I added a 2nd connector and the appropriate autoLinks on the same host/port but with a different name. It seems to have resolved the issue.
>
> 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connectors --> 5000 msg/s.
> 1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors --> 6600 msg/s.
> 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors --> 7700 msg/s.
>
> I think this confirms the problem is due to the fact a single connection is being shared by all clients (consumers/producers) and that having a sort of pool of connections or a connection per workerThread is a solution to consider.
>
> What do you think?
>
> I added a 3rd connector to see if it changes anything but it
> didn't. Do you think this is maybe because the dispatcher is not able
> to process fast enough and saturate the 2 connectors?
> 1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors --> 7700 msg/s.
>
> Adel
>
>> From: adelboutros@live.com
>> To: users@qpid.apache.org
>> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> Date: Tue, 2 Aug 2016 22:21:54 +0200
>>
>> Sorry for the typo. Indeed, it was with 3 producers. I used 4 and 8 workerThread but there wasn't a difference.
>> We want to benchmark in the worst case scenarios actually to see what is the minimum we can guarantee. This is why we are using synchronous sending. In the future, we will also benchmark with full SSL/SASL to see what impact it has on the performance.
>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>> To: users@qpid.apache.org
>>> From: gsim@redhat.com
>>> Date: Tue, 2 Aug 2016 20:41:54 +0100
>>>
>>> On 02/08/16 20:25, Adel Boutros wrote:
>>>> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
>>>
>>> The rate is low because it is synchronous. One messages is sent to the
>>> consumer who acknowledges it, the acknowledgement is then conveyed back
>>> to the sender who then can send the next message.
>>>
>>> The rate for a single producer through the router was 6,000 per second.
>>> That works out as a roundtrip time of 167 microsecs or so. In your
>>> table, the 16,000 rate was listed as being for 3 producers. Based on the
>>> rate of a single producer, the best you could hope for there is 3 *
>>> 6,000 i.e 18,000. (How many worker threads did you have on the router
>>> for that case?)
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>>> For additional commands, e-mail: users-help@qpid.apache.org
>>>
>>
>
>
>
--
Ulf
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello again,
As requested, I added a 2nd connector with the appropriate autoLinks on the same host/port but with a different name. It seems to have resolved the issue.
1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 1 connector --> 5000 msg/s.
1 Broker, 1 Dispatcher, 3 producers, 0 consumers, 2 connectors --> 6600 msg/s.
1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 2 connectors --> 7700 msg/s.
I think this confirms that the problem is due to a single connection being shared by all clients (consumers/producers), and that a pool of connections, or a connection per workerThread, is a solution to consider.
What do you think?
I added a 3rd connector to see if it changed anything, but it didn't. Do you think this might be because the dispatcher cannot process fast enough to saturate the 2 connectors?
1 Broker, 1 Dispatcher, 4 producers, 0 consumers, 3 connectors --> 7700 msg/s.
Adel
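
The second connector is created the same way as the first, under a different name, with autoLinks pointing at it. A sketch following the qdmanage commands from earlier in the thread (the connector2/out2 names are illustrative):

```
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector2
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector2 name=localhost.broker.10455.perf.topic.out2
```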
> From: adelboutros@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> Date: Tue, 2 Aug 2016 22:21:54 +0200
>
> Sorry for the typo. Indeed, it was with 3 producers. I used 4 and 8 workerThread but there wasn't a difference.
> We want to benchmark in the worst case scenarios actually to see what is the minimum we can guarantee. This is why we are using synchronous sending. In the future, we will also benchmark with full SSL/SASL to see what impact it has on the performance.
> > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > To: users@qpid.apache.org
> > From: gsim@redhat.com
> > Date: Tue, 2 Aug 2016 20:41:54 +0100
> >
> > On 02/08/16 20:25, Adel Boutros wrote:
> > > How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
> >
> > The rate is low because it is synchronous. One messages is sent to the
> > consumer who acknowledges it, the acknowledgement is then conveyed back
> > to the sender who then can send the next message.
> >
> > The rate for a single producer through the router was 6,000 per second.
> > That works out as a roundtrip time of 167 microsecs or so. In your
> > table, the 16,000 rate was listed as being for 3 producers. Based on the
> > rate of a single producer, the best you could hope for there is 3 *
> > 6,000 i.e 18,000. (How many worker threads did you have on the router
> > for that case?)
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > For additional commands, e-mail: users-help@qpid.apache.org
> >
>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Sorry for the typo. Indeed, it was with 3 producers. I tried 4 and 8 workerThreads but there was no difference.
We want to benchmark worst-case scenarios to see the minimum we can guarantee; this is why we use synchronous sending. In the future we will also benchmark with full SSL/SASL to see its impact on performance.
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: gsim@redhat.com
> Date: Tue, 2 Aug 2016 20:41:54 +0100
>
> On 02/08/16 20:25, Adel Boutros wrote:
> > How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
>
> The rate is low because it is synchronous. One messages is sent to the
> consumer who acknowledges it, the acknowledgement is then conveyed back
> to the sender who then can send the next message.
>
> The rate for a single producer through the router was 6,000 per second.
> That works out as a roundtrip time of 167 microsecs or so. In your
> table, the 16,000 rate was listed as being for 3 producers. Based on the
> rate of a single producer, the best you could hope for there is 3 *
> 6,000 i.e 18,000. (How many worker threads did you have on the router
> for that case?)
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org
>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Gordon Sim <gs...@redhat.com>.
On 02/08/16 20:25, Adel Boutros wrote:
> How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
The rate is low because it is synchronous. One message is sent to the
consumer who acknowledges it, the acknowledgement is then conveyed back
to the sender who then can send the next message.
The rate for a single producer through the router was 6,000 per second.
That works out as a roundtrip time of 167 microsecs or so. In your
table, the 16,000 rate was listed as being for 3 producers. Based on the
rate of a single producer, the best you could hope for there is 3 *
6,000 i.e 18,000. (How many worker threads did you have on the router
for that case?)
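
The arithmetic above is easy to check: a synchronous sender can do at most 1/roundtrip sends per second, and N independent producers at most N times that. A minimal sketch of the numbers Gordon quotes:

```java
// Sketch of the synchronous-send throughput arithmetic from this thread.
public class SyncSendMath {
    public static void main(String[] args) {
        double singleProducerRate = 6000.0;                        // msg/s observed for 1 producer
        double roundtripMicros = 1_000_000.0 / singleProducerRate; // one send-settle roundtrip
        System.out.printf("roundtrip ~= %.0f us%n", roundtripMicros);
        double ceilingThreeProducers = 3 * singleProducerRate;     // best case for 3 producers
        System.out.printf("3-producer ceiling = %.0f msg/s%n", ceilingThreeProducers);
    }
}
```

This gives a roundtrip of about 167 microseconds and an 18,000 msg/s ceiling for 3 producers, matching the figures in the reply.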
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
How about the tests we did with consumer/producers connected directly to the dispatcher without any broker where we had 16 000 msg/s with 4 producers. Is it also a very low value given that there is no persistence or storing here? It was also synchronous sending.
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: gsim@redhat.com
> Date: Tue, 2 Aug 2016 20:21:40 +0100
>
> On 02/08/16 20:18, Ted Ross wrote:
> > Since this is synchronous and durable, I would expect the store to be
> > the bottleneck in these cases and that for rates of ~7.5K, the router
> > shouldn't be a factor.
>
> I don't know anything about the java broker internals, but when going
> through a router the messages will all be sent down one connection. The
> broker will probably then process these serially. That _may_ have a
> lower limit than writing to the store from multiple threads in parallel.
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org
>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Gordon Sim <gs...@redhat.com>.
On 02/08/16 20:18, Ted Ross wrote:
> Since this is synchronous and durable, I would expect the store to be
> the bottleneck in these cases and that for rates of ~7.5K, the router
> shouldn't be a factor.
I don't know anything about the java broker internals, but when going
through a router the messages will all be sent down one connection. The
broker will probably then process these serially. That _may_ have a
lower limit than writing to the store from multiple threads in parallel.
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
Since this is synchronous and durable, I would expect the store to be
the bottleneck in these cases and that for rates of ~7.5K, the router
shouldn't be a factor. The only reason I can see for the router to
affect throughput would be the latency it introduces. Of course, it's
possible that there's a defect we need to fix.
-Ted
On 08/02/2016 03:12 PM, Adel Boutros wrote:
> I forgot to add that we use durable queues and persistence is set to DEFAULT.
>
>> From: adelboutros@live.com
>> To: users@qpid.apache.org
>> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> Date: Tue, 2 Aug 2016 21:10:35 +0200
>>
> >> We are using Qpid Java Broker 6.0.1 with Berkeley DB as the message store. Were you using asynchronous sending when you got 80K? I think we can reach higher speeds with asynchronous sending. We actually timestamp right before and after the call to the "send" method; if we used asynchronous sending, the timestamping would be wrong as it doesn't account for settlement.
>> I will try tomorrow the multiple connectors and let you know how it goes. Do you want me to test asynchronous sending as well?
> >> Regards, Adel
>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>> To: users@qpid.apache.org
>>> From: tross@redhat.com
>>> Date: Tue, 2 Aug 2016 14:44:22 -0400
>>>
>>>
>>>
>>> On 08/02/2016 02:10 PM, Adel Boutros wrote:
>>>> Hello Ted, Gordon,
>>>>
>>>> When I say the JMS producers are sending synchronously, I mean they don't set any options to the connection URL such as jms.forceAsyncSend. So I guess this means the producer will wait for the settlement before sending message X + 1.
>>>>
> >>>> When I say it fails, I mean that with 1 producer, I have 2500 msg/s. When I add a second producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer). But when I add a 3rd producer, I am at 5100 msg/s while I expect it to be around 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.
>>>
>>> Understood.
>>>
>>>>
>>>> What you both explained to me about the single connection is indeed a plausible candidate because in the tests of "broker only", the throughput of a single connection is around 3 500 msg/s. So on a single connection, I shouldn't go above that figure which is what I am seeing. I imagine that when I add more producers/consumers, the throughput will shrink even more because the same connection is used by all the producers and the consumers.
>>>>
>>>> Do you think it might be an a good idea if the connections were per workerThread and not only a single connection?
>>>
>>> I think this is an interesting feature to consider, however 5.1K
>>> messages per second on a connection seems like a really low limit to me.
>>> As I recall, we were able to get closer to 80K to 100K per connection
>>> on qpidd. Which broker are you using?
>>>
>>> An interesting experiment would be to configure two connectors to the
>>> same broker (with different names) and configure autoLinks with
>>> different addresses to the two connectors. This would show if the
>>> bottleneck is the router-to-broker connection.
>>>
>>>>
>>>> Another solution would be to use a maximum of 3 clients (producer or consumer) per dispatcher and have a network of interconnected dispatchers but I find it very heavy and hard to maintain and support on the client-side. Do you agree?
>>>
>>> I don't think this would solve your problem anyway.
>>>
>>>>
> >>>> JMS Producer code
> >>>> ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
> >>>> Connection connection = connectionFactory.createConnection();
> >>>> Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> >>>> Topic topic = session.createTopic("perf.topic");
> >>>> MessageProducer messageProducer = session.createProducer(topic);
> >>>> BytesMessage message = session.createBytesMessage();
> >>>> message.writeBytes(new byte[100]); // 100-byte payload used in the tests
> >>>> messageProducer.send(message);
>>>>
>>>> Regards,
>>>> Adel
>>>>
>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>> To: users@qpid.apache.org
>>>>> From: tross@redhat.com
>>>>> Date: Tue, 2 Aug 2016 13:42:24 -0400
>>>>>
>>>>>
>>>>>
>>>>> On 07/29/2016 08:40 AM, Adel Boutros wrote:
>>>>>> Hello Ted,
>>>>>>
>>>>>> Increasing the link capacity had no impact. So, I have
>>>>>> done a series of tests to try and isolate the issue.
>>>>>> We tested 3 different architecture without any consumers:
>>>>>> Producer --> Broker
>>>>>> Producer --> Dispatcher
>>>>>> Producer --> Dispatcher --> Broker
>>>>>> In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
>>>>>>
>>>>>> Our benchmark machines have 20 cores and 396 Gb Ram each. We have
>>>>>> currently put consumers/producers on 1 machine and dispatcher/brokers on another machine. They are both connected with a 10 Gbps ethernet connection. Nothing else is using the machines.
>>>>>>
>>>>>> The results are in
>>>>>> the table below.
>>>>>>
>>>>>> What I could observe:
>>>>>> The broker alone scales well when I add producers
> >>>>>> The dispatcher alone scales well when I add producers
> >>>>>> The dispatcher connected to a broker scales well with 2 producers
> >>>>>> The dispatcher connected to a broker fails when having 3 producers or more
>>>>>
>>>>> In what way does it fail?
>>>>>
>>>>>>
>>>>>> I
>>>>>> also did some "qdstat -l" while the test was running and at max had 5
>>>>>> unsettled deliveries. So I don't think the problem comes with the
>>>>>> linkCapacity.
>>>>>
>>>>> You mentioned that you are running in synchronous mode. Does this mean
>>>>> that each producer is waiting for settlement on message X before sending
>>>>> message X+1?
>>>>>
>>>>>>
>>>>>> What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
>>>>>
>>>>> The router multiplexes the broker traffic over a single connection to
>>>>> the broker.
>>>>>
>>>>>>
>>>>>> Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
>>>>>
>>>>> Probably not in your case since the backlogs are much smaller than the
>>>>> default capacity.
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Number of Producers
>>>>>> Broker
>>>>>> Dispatcher
>>>>>> Combined Producer Throughput (msg/s)
>>>>>> Combined Producer Latency (micros)
>>>>>>
>>>>>>
>>>>>> 1
>>>>>> YES
>>>>>>
>>>>>> NO
>>>>>>
>>>>>> 3 500
>>>>>> 370
>>>>>>
>>>>>>
>>>>>> 4
>>>>>> YES
>>>>>> NO
>>>>>>
>>>>>> 9 200
>>>>>> 420
>>>>>>
>>>>>>
>>>>>> 1
>>>>>> NO
>>>>>> YES
>>>>>> 6 000
>>>>>> 180
>>>>>>
>>>>>>
>>>>>> 2
>>>>>> NO
>>>>>> YES
>>>>>> 12 000
>>>>>> 192
>>>>>>
>>>>>>
>>>>>> 3
>>>>>> NO
>>>>>> YES
>>>>>> 16 000
>>>>>> 201
>>>>>>
>>>>>>
>>>>>> 1
>>>>>> YES
>>>>>> YES
>>>>>> 2 500
>>>>>> 360
>>>>>>
>>>>>>
>>>>>> 2
>>>>>> YES
>>>>>> YES
>>>>>> 4 800
>>>>>> 400
>>>>>>
>>>>>>
>>>>>> 3
>>>>>> YES
>>>>>> YES
>>>>>> 5 200
>>>>>> 540
>>>>>>
>>>>>>
>>>>>> qdstat -l
>>>>>> bash$ qdstat -b dell445srv:10254 -l
>>>>>> Router Links
>>>>>> type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
>>>>>> =======================================================================================================================
>>>>>> endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
>>>>>> endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
>>>>>> endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
>>>>>> endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
>>>>>> endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
>>>>>> endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
>>>>>> endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
>>>>>>
>>>>>> Regards,
>>>>>> Adel
>>>>>>
>>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>>> To: users@qpid.apache.org
>>>>>>> From: tross@redhat.com
>>>>>>> Date: Tue, 26 Jul 2016 10:32:29 -0400
>>>>>>>
>>>>>>> Adel,
>>>>>>>
>>>>>>> That's a good question. I think it's highly dependent on your
>>>>>>> requirements and the environment. Here are some random thoughts:
>>>>>>>
>>>>>>> - There's a trade-off between memory use (message buffering) and
>>>>>>> throughput. If you have many clients sharing the message bus,
>>>>>>> smaller values of linkCapacity will protect the router memory. If
>>>>>>> you have relatively few clients wanting to go fast, a larger
>>>>>>> linkCapacity is appropriate.
>>>>>>> - If the underlying network has high latency (satellite links, long
>>>>>>> distances, etc.), larger values of linkCapacity will be needed to
>>>>>>> protect against stalling caused by delayed settlement.
>>>>>>> - The default of 250 is considered a reasonable compromise. I think a
>>>>>>> value around 10 is better for a shared bus, but 500-1000 might be
>>>>>>> better for throughput with few clients.
>>>>>>>
>>>>>>> -Ted
>>>>>>>
>>>>>>>
>>>>>>> On 07/26/2016 10:08 AM, Adel Boutros wrote:
>>>>>>>> Thanks Ted,
>>>>>>>>
>>>>>>>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Adel
>>>>>>>>
>>>>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>>>>> To: users@qpid.apache.org
>>>>>>>>> From: tross@redhat.com
>>>>>>>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
>>>>>>>>>
>>>>>>>>> Adel,
>>>>>>>>>
>>>>>>>>> The number of workers should be related to the number of available
>>>>>>>>> processor cores, not the volume of work or number of connections. 4 is
>>>>>>>>> probably a good number for testing.
>>>>>>>>>
>>>>>>>>> I'm not sure what the default link credit is for the Java broker (it's
>>>>>>>>> 500 for the c++ broker) or the clients you're using.
>>>>>>>>>
>>>>>>>>> The metric you should adjust is the linkCapacity for the listener and
>>>>>>>>> route-container connector. LinkCapacity is the number of deliveries
>>>>>>>>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
>>>>>>>>> defaults linkCapacity to 250. Depending on the volumes in your test,
>>>>>>>>> this might account for the discrepancy. You should try increasing this
>>>>>>>>> value.
>>>>>>>>>
>>>>>>>>> Note that linkCapacity is used to set initial credit for your links.
>>>>>>>>>
>>>>>>>>> -Ted
>>>>>>>>>
>>>>>>>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> >>>>>>>>>> Hello, We are running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid Dispatch Router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
>>>>>>>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a simple Java Broker. The throughput of the producers is down to half and there are a lot of oscillations in the presence of the dispatcher.
>>>>>>>>>>
>>>>>>>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
>>>>>>>>>>
>>>>>>>>>> Can you please help us find the cause of this issue?
>>>>>>>>>>
> >>>>>>>>>> Dispatch router config
>>>>>>>>>> router {
>>>>>>>>>> id: router.10454
>>>>>>>>>> mode: interior
>>>>>>>>>> worker-threads: 4
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> listener {
>>>>>>>>>> host: 0.0.0.0
>>>>>>>>>> port: 10454
>>>>>>>>>> role: normal
>>>>>>>>>> saslMechanisms: ANONYMOUS
>>>>>>>>>> requireSsl: no
>>>>>>>>>> authenticatePeer: no
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> Java Broker config
>>>>>>>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
>>>>>>>>>> 1 Topic + 1 Queue
>>>>>>>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
>>>>>>>>>>
>>>>>>>>>> Qdmanage on Dispatcher
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
>>>>>>>>>>
>>>>>>>>>> Combined producer throughput
>>>>>>>>>> 1 Broker: http://hpics.li/a9d6efa
>>>>>>>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Adel
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> ---------------------------------------------------------------------
>>>>>>>>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>>>>>>>>> For additional commands, e-mail: users-help@qpid.apache.org
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
I forgot to add that we use durable queues and that the persistence is set to DEFAULT.
> From: adelboutros@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> Date: Tue, 2 Aug 2016 21:10:35 +0200
>
> We are using Qpid Java Broker 6.0.1 with Berkeley DB as the message store. Were you using asynchronous sending when you got 80K? I think we can reach higher speeds with asynchronous sending. We actually timestamp right before and after the call to the "send" method; if we use asynchronous sending, the timestamping will be wrong, as it doesn't account for the settlement.
> I will try the multiple connectors tomorrow and let you know how it goes. Do you want me to test asynchronous sending as well?
> Regards, Adel
> > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > To: users@qpid.apache.org
> > From: tross@redhat.com
> > Date: Tue, 2 Aug 2016 14:44:22 -0400
> >
> >
> >
> > On 08/02/2016 02:10 PM, Adel Boutros wrote:
> > > Hello Ted, Gordon,
> > >
> > > When I say the JMS producers are sending synchronously, I mean they don't set any options to the connection URL such as jms.forceAsyncSend. So I guess this means the producer will wait for the settlement before sending message X + 1.
> > >
> > > When I say it fails, I mean that with 1 producer, I have 2500 msg/s. When I add a second producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer). But when I add a 3rd producer, I am at 5100 msg/s while I expect it to be around 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.
> >
> > Understood.
> >
> > >
> > > What you both explained to me about the single connection is indeed a plausible candidate because in the tests of "broker only", the throughput of a single connection is around 3 500 msg/s. So on a single connection, I shouldn't go above that figure which is what I am seeing. I imagine that when I add more producers/consumers, the throughput will shrink even more because the same connection is used by all the producers and the consumers.
> > >
> > > Do you think it might be a good idea if the connections were per worker thread and not only a single connection?
> >
> > I think this is an interesting feature to consider, however 5.1K
> > messages per second on a connection seems like a really low limit to me.
> > As I recall, we were able to get closer to 80K to 100K per connection
> > on qpidd. Which broker are you using?
> >
> > An interesting experiment would be to configure two connectors to the
> > same broker (with different names) and configure autoLinks with
> > different addresses to the two connectors. This would show if the
> > bottleneck is the router-to-broker connection.
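
Ted's two-connector experiment might look like this with qdmanage. These commands are a hypothetical sketch following the pattern already used in this thread; the connector names, autoLink names, and the two addresses (perfQueueA/perfQueueB) are illustrative, and the broker would need matching queues for both addresses.

```
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.conn.A
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.conn.B
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueueA dir=in connection=broker.conn.A name=autolink.A
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueueB dir=in connection=broker.conn.B name=autolink.B
```

If producers sending to perfQueueA and perfQueueB together exceed the single-connection rate, the bottleneck is the router-to-broker connection rather than the broker itself.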
> >
> > >
> > > Another solution would be to use a maximum of 3 clients (producer or consumer) per dispatcher and have a network of interconnected dispatchers but I find it very heavy and hard to maintain and support on the client-side. Do you agree?
> >
> > I don't think this would solve your problem anyway.
> >
> > >
> > > JMS Producer code
> > > ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
> > > Connection connection = connectionFactory.createConnection();
> > > Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> > > Topic topic = session.createTopic("perf.topic");
> > > MessageProducer messageProducer = session.createProducer(topic);
> > > BytesMessage message = session.createBytesMessage();
> > > message.writeBytes(new byte[100]);
> > > messageProducer.send(message); // synchronous: blocks until settled
> > >
> > > Regards,
> > > Adel
> > >
> > >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > >> To: users@qpid.apache.org
> > >> From: tross@redhat.com
> > >> Date: Tue, 2 Aug 2016 13:42:24 -0400
> > >>
> > >>
> > >>
> > >> On 07/29/2016 08:40 AM, Adel Boutros wrote:
> > >>> Hello Ted,
> > >>>
> > >>> Increasing the link capacity had no impact. So, I have
> > >>> done a series of tests to try and isolate the issue.
> > >>> We tested three different architectures without any consumers:
> > >>> Producer --> Broker
> > >>> Producer --> Dispatcher
> > >>> Producer --> Dispatcher --> Broker
> > >>> In every test, we sent 100 000 messages, each containing a byte array of 100 bytes. The producers are sending in synchronous mode with AUTO_ACKNOWLEDGE.
> > >>>
> > >>> Our benchmark machines have 20 cores and 396 GB of RAM each. We have
> > >>> currently put consumers/producers on one machine and dispatcher/brokers on another machine. They are connected with a 10 Gbps Ethernet connection. Nothing else is using the machines.
> > >>>
> > >>> The results are in the table below.
> > >>>
> > >>> What I could observe:
> > >>> - The broker alone scales well when I add producers
> > >>> - The dispatcher alone scales well when I add producers
> > >>> - The dispatcher connected to a broker scales well with 2 producers
> > >>> - The dispatcher connected to a broker fails when having 3 producers or more
> > >>
> > >> In what way does it fail?
> > >>
> > >>>
> > >>> I also did some "qdstat -l" while the test was running and at max had 5
> > >>> unsettled deliveries. So I don't think the problem comes from the
> > >>> linkCapacity.
> > >>
> > >> You mentioned that you are running in synchronous mode. Does this mean
> > >> that each producer is waiting for settlement on message X before sending
> > >> message X+1?
> > >>
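
Ted's question points at a simple back-of-envelope model (my own illustration, not from the thread): a strictly synchronous producer can have at most one delivery in flight, so its rate is bounded by the send-to-settle round trip, and the aggregate is capped by whatever the shared path can carry. The latency and limit values below are taken from the numbers reported in this thread.

```python
# Back-of-envelope model (illustrative): a synchronous JMS producer waits
# for settlement of message X before sending X+1, so each producer carries
# at most one in-flight delivery.

def sync_throughput(round_trip_s: float, producers: int, path_limit: float) -> float:
    """Aggregate msg/s for N synchronous producers, capped by whatever the
    shared router-to-broker connection can carry (path_limit)."""
    per_producer = 1.0 / round_trip_s
    return min(per_producer * producers, path_limit)

# One producer with ~370 microseconds of send-to-settle latency:
print(round(sync_throughput(370e-6, 1, 100_000)))  # 2703
# Three producers behind a shared path saturating near 5 200 msg/s:
print(round(sync_throughput(370e-6, 3, 5_200)))    # 5200
```

This matches the shape of the measurements: adding producers scales until the shared connection saturates, after which extra producers add nothing.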
> > >>>
> > >>> What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
> > >>
> > >> The router multiplexes the broker traffic over a single connection to
> > >> the broker.
> > >>
> > >>>
> > >>> Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
> > >>
> > >> Probably not in your case since the backlogs are much smaller than the
> > >> default capacity.
> > >>
> > >>>
> > >>> Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
> > >>> --------------------|--------|------------|--------------------------------------|-----------------------------------
> > >>> 1                   | YES    | NO         |  3 500                               | 370
> > >>> 4                   | YES    | NO         |  9 200                               | 420
> > >>> 1                   | NO     | YES        |  6 000                               | 180
> > >>> 2                   | NO     | YES        | 12 000                               | 192
> > >>> 3                   | NO     | YES        | 16 000                               | 201
> > >>> 1                   | YES    | YES        |  2 500                               | 360
> > >>> 2                   | YES    | YES        |  4 800                               | 400
> > >>> 3                   | YES    | YES        |  5 200                               | 540
> > >>>
> > >>>
> > >>> qdstat -l
> > >>> bash$ qdstat -b dell445srv:10254 -l
> > >>> Router Links
> > >>> type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
> > >>> =======================================================================================================================
> > >>> endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
> > >>> endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
> > >>> endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
> > >>> endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
> > >>> endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
> > >>> endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
> > >>> endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
> > >>>
> > >>> Regards,
> > >>> Adel
> > >>>
> > >>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > >>>> To: users@qpid.apache.org
> > >>>> From: tross@redhat.com
> > >>>> Date: Tue, 26 Jul 2016 10:32:29 -0400
> > >>>>
> > >>>> Adel,
> > >>>>
> > >>>> That's a good question. I think it's highly dependent on your
> > >>>> requirements and the environment. Here are some random thoughts:
> > >>>>
> > >>>> - There's a trade-off between memory use (message buffering) and
> > >>>> throughput. If you have many clients sharing the message bus,
> > >>>> smaller values of linkCapacity will protect the router memory. If
> > >>>> you have relatively few clients wanting to go fast, a larger
> > >>>> linkCapacity is appropriate.
> > >>>> - If the underlying network has high latency (satellite links, long
> > >>>> distances, etc.), larger values of linkCapacity will be needed to
> > >>>> protect against stalling caused by delayed settlement.
> > >>>> - The default of 250 is considered a reasonable compromise. I think a
> > >>>> value around 10 is better for a shared bus, but 500-1000 might be
> > >>>> better for throughput with few clients.
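
The memory side of the trade-off Ted describes can be estimated with a simple rule of thumb (my own illustration, not a Dispatch formula): worst-case router buffering grows with the number of links, the linkCapacity, and the message size.

```python
# Rough sizing illustration (a rule of thumb, not qdrouterd internals):
# small linkCapacity protects router memory on a shared bus because every
# link could in principle hold linkCapacity unsettled deliveries at once.

def worst_case_buffer_bytes(links: int, link_capacity: int, avg_msg_bytes: int) -> int:
    # Upper bound: every link full of unsettled deliveries simultaneously.
    return links * link_capacity * avg_msg_bytes

# 1 000 client links at the default capacity of 250, with 1 KiB messages:
print(worst_case_buffer_bytes(1000, 250, 1024))  # 256000000 (~244 MiB)
```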
> > >>>>
> > >>>> -Ted
> > >>>>
> > >>>>
> > >>>> On 07/26/2016 10:08 AM, Adel Boutros wrote:
> > >>>>> Thanks Ted,
> > >>>>>
> > >>>>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
> > >>>>>
> > >>>>> Regards,
> > >>>>> Adel
> > >>>>>
> > >>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > >>>>>> To: users@qpid.apache.org
> > >>>>>> From: tross@redhat.com
> > >>>>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
> > >>>>>>
> > >>>>>> Adel,
> > >>>>>>
> > >>>>>> The number of workers should be related to the number of available
> > >>>>>> processor cores, not the volume of work or number of connections. 4 is
> > >>>>>> probably a good number for testing.
> > >>>>>>
> > >>>>>> I'm not sure what the default link credit is for the Java broker (it's
> > >>>>>> 500 for the c++ broker) or the clients you're using.
> > >>>>>>
> > >>>>>> The metric you should adjust is the linkCapacity for the listener and
> > >>>>>> route-container connector. LinkCapacity is the number of deliveries
> > >>>>>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
> > >>>>>> defaults linkCapacity to 250. Depending on the volumes in your test,
> > >>>>>> this might account for the discrepancy. You should try increasing this
> > >>>>>> value.
> > >>>>>>
> > >>>>>> Note that linkCapacity is used to set initial credit for your links.
> > >>>>>>
> > >>>>>> -Ted
> > >>>>>>
> > >>>>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> > >>>>>>> Hello, We are actually running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid dispatch router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
> > >>>>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a single Java Broker. The throughput of the producers drops to half, and there are a lot of oscillations in the presence of the dispatcher.
> > >>>>>>>
> > >>>>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
> > >>>>>>>
> > >>>>>>> Can you please help us find the cause of this issue?
> > >>>>>>>
> > >>>>>>> Dispatch router config
> > >>>>>>> router {
> > >>>>>>> id: router.10454
> > >>>>>>> mode: interior
> > >>>>>>> worker-threads: 4
> > >>>>>>> }
> > >>>>>>>
> > >>>>>>> listener {
> > >>>>>>> host: 0.0.0.0
> > >>>>>>> port: 10454
> > >>>>>>> role: normal
> > >>>>>>> saslMechanisms: ANONYMOUS
> > >>>>>>> requireSsl: no
> > >>>>>>> authenticatePeer: no
> > >>>>>>> }
> > >>>>>>>
> > >>>>>>> Java Broker config
> > >>>>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
> > >>>>>>> 1 Topic + 1 Queue
> > >>>>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
> > >>>>>>>
> > >>>>>>> Qdmanage on Dispatcher
> > >>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
> > >>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
> > >>>>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
> > >>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
> > >>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
> > >>>>>>>
> > >>>>>>> Combined producer throughput
> > >>>>>>> 1 Broker: http://hpics.li/a9d6efa
> > >>>>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
> > >>>>>>>
> > >>>>>>> Regards,
> > >>>>>>> Adel
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
We are using Qpid Java Broker 6.0.1 with Berkeley DB as the message store. Were you using asynchronous sending when you got 80K? I think we can reach higher speeds with asynchronous sending. We actually timestamp right before and after the call to the "send" method; if we use asynchronous sending, the timestamping will be wrong, as it doesn't account for the settlement.
I will try the multiple connectors tomorrow and let you know how it goes. Do you want me to test asynchronous sending as well?
Regards, Adel
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Gordon Sim <gs...@redhat.com>.
On 02/08/16 19:44, Ted Ross wrote:
> 5.1K messages per second on a connection seems like a really low limit
> to me. As I recall, we were able to get closer to 80K to 100K per
> connection on qpidd.
If these are persistent messages (which I think is the default for JMS)
and the queue to which they are sent on the broker is durable, then the
rate will be lower.
[dispatch] router concurrency and scale [was Re: [Performance]
Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0]
Posted by Alan Conway <ac...@redhat.com>.
On Tue, 2016-08-02 at 14:44 -0400, Ted Ross wrote:
>
> On 08/02/2016 02:10 PM, Adel Boutros wrote:
[snip]
> >
> What you both explained to me about the single connection is indeed
> > a plausible candidate because in the tests of "broker only", the
> > throughput of a single connection is around 3 500 msg/s. So on a
> > single connection, I shouldn't go above that figure which is what I
> > am seeing. I imagine that when I add more producers/consumers, the
> > throughput will shrink even more because the same connection is
> > used by all the producers and the consumers.
> >
> > Do you think it might be an a good idea if the connections were per
> > workerThread and not only a single connection?
>
> I think this is an interesting feature to consider
I think we should consider this. For dispatch the big question is not what rate the back-end can handle, but "how does adding dispatch affect performance and scalability?" Say the back-end only does 5k msg/s on a single connection, but can do 15k on 3 connections. If dispatch reduces that to 5k *no matter how many client connections*, that is a problem.
Perhaps connectors should have a "concurrency" setting. I wouldn't tie it to the router's worker threads because it is really about concurrency at the back-end, not on the router. We need to avoid unexpected re-ordering of messages between a given client/back-end pair, so we have to think about how to bind/balance client load over back-end connections. It complicates the disconnect/reconnect/am-I-connected story quite a bit.
Not simple, but definitely worth a think...
Cheers,
Alan.
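
The binding/balancing problem Alan raises can be sketched very simply (a hypothetical illustration, not qdrouterd code): spread client links over a pool of back-end connections, but pin each client to one connection so that client's messages are never re-ordered.

```python
# Sketch of a "concurrency" setting on a connector: a stable hash keeps a
# given client on the same back-end connection for its lifetime, preserving
# per-client ordering while spreading load over the pool.

def pick_backend(client_id: str, n_connections: int) -> int:
    # Deterministic, so reconnects of the same client land on the same
    # back-end connection and ordering is not disturbed.
    return sum(client_id.encode()) % n_connections

assignments = {c: pick_backend(c, 3) for c in ("producer-1", "producer-2", "producer-3")}
print(assignments)
# Determinism: repeated lookups for one client always agree.
assert pick_backend("producer-1", 3) == pick_backend("producer-1", 3)
```

The hard parts Alan mentions (reconnect and "am I connected" semantics) are exactly what this toy function glosses over.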
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
On 08/02/2016 02:10 PM, Adel Boutros wrote:
> Hello Ted, Gordon,
>
> When I say the JMS producers are sending synchronously, I mean they don't set any options to the connection URL such as jms.forceAsyncSend. So I guess this means the producer will wait for the settlement before sending message X + 1.
>
> When I say it fails, I mean that with 1 producer, I have 2500 msg/s. When I add a second producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer). But when I add a 3rd producer, I am at 5100 msg/s while I expect it to be around 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.
Understood.
>
> What you both explained to me about the single connection is indeed a plausible candidate because in the tests of "broker only", the throughput of a single connection is around 3 500 msg/s. So on a single connection, I shouldn't go above that figure which is what I am seeing. I imagine that when I add more producers/consumers, the throughput will shrink even more because the same connection is used by all the producers and the consumers.
>
> Do you think it might be a good idea if the connections were per workerThread and not only a single connection?
I think this is an interesting feature to consider, however 5.1K
messages per second on a connection seems like a really low limit to me.
As I recall, we were able to get closer to 80K to 100K per connection
on qpidd. Which broker are you using?
An interesting experiment would be to configure two connectors to the
same broker (with different names) and configure autoLinks with
different addresses to the two connectors. This would show if the
bottleneck is the router-to-broker connection.
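The suggested experiment might be sketched with qdmanage roughly as follows; the connector names and the second address perfQueue2 are invented for illustration, and the broker would need a matching second queue:

```shell
# Illustrative sketch: two connectors to the same broker, each feeding a
# different address, to test whether the single router-to-broker
# connection is the throughput bottleneck. Names and perfQueue2 are made up.
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.conn.A
qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.conn.B
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=broker.conn.A name=perfQueue.in.A
qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue2 dir=in connection=broker.conn.B name=perfQueue2.in.B
```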
>
> Another solution would be to use a maximum of 3 clients (producer or consumer) per dispatcher and have a network of interconnected dispatchers but I find it very heavy and hard to maintain and support on the client-side. Do you agree?
I don't think this would solve your problem anyway.
>
> JMS Producer code
> ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
> Connection connection = connectionFactory.createConnection();
> Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> Topic topic = session.createTopic("perf.topic");
> messageProducer = session.createProducer(topic);
> messageProducer.send(message);
>
> Regards,
> Adel
>
>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> To: users@qpid.apache.org
>> From: tross@redhat.com
>> Date: Tue, 2 Aug 2016 13:42:24 -0400
>>
>>
>>
>> On 07/29/2016 08:40 AM, Adel Boutros wrote:
>>> Hello Ted,
>>>
>>> Increasing the link capacity had no impact. So, I have
>>> done a series of tests to try and isolate the issue.
>>> We tested 3 different architectures without any consumers:
>>> Producer --> Broker
>>> Producer --> Dispatcher
>>> Producer --> Dispatcher --> Broker
>>> In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
>>>
>>> Our benchmark machines have 20 cores and 396 GB of RAM each. We have
>>> currently put consumers/producers on 1 machine and dispatcher/brokers on another machine. They are both connected with a 10 Gbps Ethernet connection. Nothing else is using the machines.
>>>
>>> The results are in
>>> the table below.
>>>
>>> What I could observe:
>>> The broker alone scales well when I add producers
>>> The dispatcher alone scales well when I add producers
>>> The dispatcher connected to a broker scales well with 2 producers
>>> The dispatcher connected to a broker fails when having 3 producers or more
>>
>> In what way does it fail?
>>
>>>
>>> I
>>> also did some "qdstat -l" while the test was running and at max had 5
>>> unsettled deliveries. So I don't think the problem comes with the
>>> linkCapacity.
>>
>> You mentioned that you are running in synchronous mode. Does this mean
>> that each producer is waiting for settlement on message X before sending
>> message X+1?
>>
>>>
>>> What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
>>
>> The router multiplexes the broker traffic over a single connection to
>> the broker.
>>
>>>
>>> Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
>>
>> Probably not in your case since the backlogs are much smaller than the
>> default capacity.
>>
>>>
>>> Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
>>> 1 | YES | NO  |  3 500 | 370
>>> 4 | YES | NO  |  9 200 | 420
>>> 1 | NO  | YES |  6 000 | 180
>>> 2 | NO  | YES | 12 000 | 192
>>> 3 | NO  | YES | 16 000 | 201
>>> 1 | YES | YES |  2 500 | 360
>>> 2 | YES | YES |  4 800 | 400
>>> 3 | YES | YES |  5 200 | 540
>>>
>>> qdstat -l
>>> bash$ qdstat -b dell445srv:10254 -l
>>> Router Links
>>> type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
>>> =======================================================================================================================
>>> endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
>>> endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
>>> endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
>>> endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
>>> endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
>>> endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
>>> endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
>>>
>>> Regards,
>>> Adel
>>>
>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>> To: users@qpid.apache.org
>>>> From: tross@redhat.com
>>>> Date: Tue, 26 Jul 2016 10:32:29 -0400
>>>>
>>>> Adel,
>>>>
>>>> That's a good question. I think it's highly dependent on your
>>>> requirements and the environment. Here are some random thoughts:
>>>>
>>>> - There's a trade-off between memory use (message buffering) and
>>>> throughput. If you have many clients sharing the message bus,
>>>> smaller values of linkCapacity will protect the router memory. If
>>>> you have relatively few clients wanting to go fast, a larger
>>>> linkCapacity is appropriate.
>>>> - If the underlying network has high latency (satellite links, long
>>>> distances, etc.), larger values of linkCapacity will be needed to
>>>> protect against stalling caused by delayed settlement.
>>>> - The default of 250 is considered a reasonable compromise. I think a
>>>> value around 10 is better for a shared bus, but 500-1000 might be
>>>> better for throughput with few clients.
>>>>
>>>> -Ted
>>>>
>>>>
>>>> On 07/26/2016 10:08 AM, Adel Boutros wrote:
>>>>> Thanks Ted,
>>>>>
>>>>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
>>>>>
>>>>> Regards,
>>>>> Adel
>>>>>
>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>> To: users@qpid.apache.org
>>>>>> From: tross@redhat.com
>>>>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
>>>>>>
>>>>>> Adel,
>>>>>>
>>>>>> The number of workers should be related to the number of available
>>>>>> processor cores, not the volume of work or number of connections. 4 is
>>>>>> probably a good number for testing.
>>>>>>
>>>>>> I'm not sure what the default link credit is for the Java broker (it's
>>>>>> 500 for the c++ broker) or the clients you're using.
>>>>>>
>>>>>> The metric you should adjust is the linkCapacity for the listener and
>>>>>> route-container connector. LinkCapacity is the number of deliveries
>>>>>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
>>>>>> defaults linkCapacity to 250. Depending on the volumes in your test,
>>>>>> this might account for the discrepancy. You should try increasing this
>>>>>> value.
>>>>>>
>>>>>> Note that linkCapacity is used to set initial credit for your links.
>>>>>>
>>>>>> -Ted
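Concretely, the linkCapacity described above is set per listener and per connector in the router configuration; the sketch below assumes the 0.6.0 config syntax used elsewhere in this thread, with 1000 chosen arbitrarily as a "few fast clients" value:

```conf
listener {
    host: 0.0.0.0
    port: 10454
    role: normal
    saslMechanisms: ANONYMOUS
    linkCapacity: 1000    # initial credit per link; the default is 250
}

connector {
    name: localhost.broker.10455.connector
    role: route-container
    addr: localhost
    port: 10455
    linkCapacity: 1000
}
```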
>>>>>>
>>>>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
>>>>>>> Hello,
>>>>>>> We are currently running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid Dispatch Router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
>>>>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a simple Java Broker. Producer throughput drops to half, and it oscillates heavily when the dispatcher is present.
>>>>>>>
>>>>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
>>>>>>>
>>>>>>> Can you please help us find the cause of this issue?
>>>>>>>
>>>>>>> Dispatch router config
>>>>>>> router {
>>>>>>> id: router.10454
>>>>>>> mode: interior
>>>>>>> worker-threads: 4
>>>>>>> }
>>>>>>>
>>>>>>> listener {
>>>>>>> host: 0.0.0.0
>>>>>>> port: 10454
>>>>>>> role: normal
>>>>>>> saslMechanisms: ANONYMOUS
>>>>>>> requireSsl: no
>>>>>>> authenticatePeer: no
>>>>>>> }
>>>>>>>
>>>>>>> Java Broker config
>>>>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
>>>>>>> 1 Topic + 1 Queue
>>>>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
>>>>>>>
>>>>>>> Qdmanage on Dispatcher
>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
>>>>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
>>>>>>>
>>>>>>> Combined producer throughput
>>>>>>> 1 Broker: http://hpics.li/a9d6efa
>>>>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
>>>>>>>
>>>>>>> Regards,
>>>>>>> Adel
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello Ted, Gordon,
When I say the JMS producers are sending synchronously, I mean they don't set any options to the connection URL such as jms.forceAsyncSend. So I guess this means the producer will wait for the settlement before sending message X + 1.
When I say it fails, I mean that with 1 producer, I have 2500 msg/s. When I add a second producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer). But when I add a 3rd producer, I am at 5100 msg/s while I expect it to be around 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.
What you both explained to me about the single connection is indeed a plausible candidate because in the tests of "broker only", the throughput of a single connection is around 3 500 msg/s. So on a single connection, I shouldn't go above that figure which is what I am seeing. I imagine that when I add more producers/consumers, the throughput will shrink even more because the same connection is used by all the producers and the consumers.
Do you think it might be a good idea if the connections were per workerThread and not only a single connection?
Another solution would be to use a maximum of 3 clients (producer or consumer) per dispatcher and have a network of interconnected dispatchers but I find it very heavy and hard to maintain and support on the client-side. Do you agree?
JMS Producer code
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic("perf.topic");
MessageProducer messageProducer = session.createProducer(topic);
BytesMessage message = session.createBytesMessage();
message.writeBytes(new byte[100]); // 100-byte payload, as in the test
messageProducer.send(message);
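For contrast with the synchronous behaviour discussed above, the jms.forceAsyncSend option mentioned earlier is enabled on the connection URL; a minimal, unverified sketch:

```java
// Sketch: same producer as above, but with asynchronous sends enabled,
// so send() does not block waiting for settlement of each message.
ConnectionFactory asyncFactory =
        new JmsConnectionFactory("amqp://machine:port?jms.forceAsyncSend=true");
```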
Regards,
Adel
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: tross@redhat.com
> Date: Tue, 2 Aug 2016 13:42:24 -0400
>
>
>
> On 07/29/2016 08:40 AM, Adel Boutros wrote:
> > Hello Ted,
> >
> > Increasing the link capacity had no impact. So, I have
> > done a series of tests to try and isolate the issue.
> > We tested 3 different architectures without any consumers:
> > Producer --> Broker
> > Producer --> Dispatcher
> > Producer --> Dispatcher --> Broker
> > In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
> >
> > Our benchmark machines have 20 cores and 396 GB of RAM each. We have
> > currently put consumers/producers on 1 machine and dispatcher/brokers on another machine. They are both connected with a 10 Gbps Ethernet connection. Nothing else is using the machines.
> >
> > The results are in
> > the table below.
> >
> > What I could observe:
> > The broker alone scales well when I add producers
> > The dispatcher alone scales well when I add producers
> > The dispatcher connected to a broker scales well with 2 producers
> > The dispatcher connected to a broker fails when having 3 producers or more
>
> In what way does it fail?
>
> >
> > I
> > also did some "qdstat -l" while the test was running and at max had 5
> > unsettled deliveries. So I don't think the problem comes with the
> > linkCapacity.
>
> You mentioned that you are running in synchronous mode. Does this mean
> that each producer is waiting for settlement on message X before sending
> message X+1?
>
> >
> > What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
>
> The router multiplexes the broker traffic over a single connection to
> the broker.
>
> >
> > Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
>
> Probably not in your case since the backlogs are much smaller than the
> default capacity.
>
> >
> > Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
> > 1 | YES | NO  |  3 500 | 370
> > 4 | YES | NO  |  9 200 | 420
> > 1 | NO  | YES |  6 000 | 180
> > 2 | NO  | YES | 12 000 | 192
> > 3 | NO  | YES | 16 000 | 201
> > 1 | YES | YES |  2 500 | 360
> > 2 | YES | YES |  4 800 | 400
> > 3 | YES | YES |  5 200 | 540
> >
> > qdstat -l
> > bash$ qdstat -b dell445srv:10254 -l
> > Router Links
> > type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
> > =======================================================================================================================
> > endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
> > endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
> > endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
> > endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
> > endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
> > endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
> > endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
> >
> > Regards,
> > Adel
> >
> >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> To: users@qpid.apache.org
> >> From: tross@redhat.com
> >> Date: Tue, 26 Jul 2016 10:32:29 -0400
> >>
> >> Adel,
> >>
> >> That's a good question. I think it's highly dependent on your
> >> requirements and the environment. Here are some random thoughts:
> >>
> >> - There's a trade-off between memory use (message buffering) and
> >> throughput. If you have many clients sharing the message bus,
> >> smaller values of linkCapacity will protect the router memory. If
> >> you have relatively few clients wanting to go fast, a larger
> >> linkCapacity is appropriate.
> >> - If the underlying network has high latency (satellite links, long
> >> distances, etc.), larger values of linkCapacity will be needed to
> >> protect against stalling caused by delayed settlement.
> >> - The default of 250 is considered a reasonable compromise. I think a
> >> value around 10 is better for a shared bus, but 500-1000 might be
> >> better for throughput with few clients.
> >>
> >> -Ted
> >>
> >>
> >> On 07/26/2016 10:08 AM, Adel Boutros wrote:
> >>> Thanks Ted,
> >>>
> >>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
> >>>
> >>> Regards,
> >>> Adel
> >>>
> >>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >>>> To: users@qpid.apache.org
> >>>> From: tross@redhat.com
> >>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
> >>>>
> >>>> Adel,
> >>>>
> >>>> The number of workers should be related to the number of available
> >>>> processor cores, not the volume of work or number of connections. 4 is
> >>>> probably a good number for testing.
> >>>>
> >>>> I'm not sure what the default link credit is for the Java broker (it's
> >>>> 500 for the c++ broker) or the clients you're using.
> >>>>
> >>>> The metric you should adjust is the linkCapacity for the listener and
> >>>> route-container connector. LinkCapacity is the number of deliveries
> >>>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
> >>>> defaults linkCapacity to 250. Depending on the volumes in your test,
> >>>> this might account for the discrepancy. You should try increasing this
> >>>> value.
> >>>>
> >>>> Note that linkCapacity is used to set initial credit for your links.
> >>>>
> >>>> -Ted
> >>>>
> >>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> >>>>> Hello,
> >>>>> We are currently running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid Dispatch Router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
> >>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a simple Java Broker. Producer throughput drops to half, and it oscillates heavily when the dispatcher is present.
> >>>>>
> >>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
> >>>>>
> >>>>> Can you please help us find the cause of this issue?
> >>>>>
> >>>>> Dispatch router config
> >>>>> router {
> >>>>> id: router.10454
> >>>>> mode: interior
> >>>>> worker-threads: 4
> >>>>> }
> >>>>>
> >>>>> listener {
> >>>>> host: 0.0.0.0
> >>>>> port: 10454
> >>>>> role: normal
> >>>>> saslMechanisms: ANONYMOUS
> >>>>> requireSsl: no
> >>>>> authenticatePeer: no
> >>>>> }
> >>>>>
> >>>>> Java Broker config
> >>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
> >>>>> 1 Topic + 1 Queue
> >>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
> >>>>>
> >>>>> Qdmanage on Dispatcher
> >>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
> >>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
> >>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
> >>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
> >>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
> >>>>>
> >>>>> Combined producer throughput
> >>>>> 1 Broker: http://hpics.li/a9d6efa
> >>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
> >>>>>
> >>>>> Regards,
> >>>>> Adel
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >>
> >
> >
>
>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
On 07/29/2016 08:40 AM, Adel Boutros wrote:
> Hello Ted,
>
> Increasing the link capacity had no impact. So, I have
> done a series of tests to try and isolate the issue.
> We tested 3 different architectures without any consumers:
> Producer --> Broker
> Producer --> Dispatcher
> Producer --> Dispatcher --> Broker
> In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
>
> Our benchmark machines have 20 cores and 396 GB of RAM each. We have
> currently put consumers/producers on 1 machine and dispatcher/brokers on another machine. They are both connected with a 10 Gbps Ethernet connection. Nothing else is using the machines.
>
> The results are in
> the table below.
>
> What I could observe:
> The broker alone scales well when I add producers
> The dispatcher alone scales well when I add producers
> The dispatcher connected to a broker scales well with 2 producers
> The dispatcher connected to a broker fails when having 3 producers or more
In what way does it fail?
>
> I
> also did some "qdstat -l" while the test was running and at max had 5
> unsettled deliveries. So I don't think the problem comes with the
> linkCapacity.
You mentioned that you are running in synchronous mode. Does this mean
that each producer is waiting for settlement on message X before sending
message X+1?
>
> What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
The router multiplexes the broker traffic over a single connection to
the broker.
>
> Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
Probably not in your case since the backlogs are much smaller than the
default capacity.
>
> Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
> 1 | YES | NO  |  3 500 | 370
> 4 | YES | NO  |  9 200 | 420
> 1 | NO  | YES |  6 000 | 180
> 2 | NO  | YES | 12 000 | 192
> 3 | NO  | YES | 16 000 | 201
> 1 | YES | YES |  2 500 | 360
> 2 | YES | YES |  4 800 | 400
> 3 | YES | YES |  5 200 | 540
>
> qdstat -l
> bash$ qdstat -b dell445srv:10254 -l
> Router Links
> type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
> =======================================================================================================================
> endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
> endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
> endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
> endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
> endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
> endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
> endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
>
> Regards,
> Adel
>
>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> To: users@qpid.apache.org
>> From: tross@redhat.com
>> Date: Tue, 26 Jul 2016 10:32:29 -0400
>>
>> Adel,
>>
>> That's a good question. I think it's highly dependent on your
>> requirements and the environment. Here are some random thoughts:
>>
>> - There's a trade-off between memory use (message buffering) and
>> throughput. If you have many clients sharing the message bus,
>> smaller values of linkCapacity will protect the router memory. If
>> you have relatively few clients wanting to go fast, a larger
>> linkCapacity is appropriate.
>> - If the underlying network has high latency (satellite links, long
>> distances, etc.), larger values of linkCapacity will be needed to
>> protect against stalling caused by delayed settlement.
>> - The default of 250 is considered a reasonable compromise. I think a
>> value around 10 is better for a shared bus, but 500-1000 might be
>> better for throughput with few clients.
>>
>> -Ted
>>
>>
>> On 07/26/2016 10:08 AM, Adel Boutros wrote:
>>> Thanks Ted,
>>>
>>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
>>>
>>> Regards,
>>> Adel
>>>
>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>> To: users@qpid.apache.org
>>>> From: tross@redhat.com
>>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
>>>>
>>>> Adel,
>>>>
>>>> The number of workers should be related to the number of available
>>>> processor cores, not the volume of work or number of connections. 4 is
>>>> probably a good number for testing.
>>>>
>>>> I'm not sure what the default link credit is for the Java broker (it's
>>>> 500 for the c++ broker) or the clients you're using.
>>>>
>>>> The metric you should adjust is the linkCapacity for the listener and
>>>> route-container connector. LinkCapacity is the number of deliveries
>>>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
>>>> defaults linkCapacity to 250. Depending on the volumes in your test,
>>>> this might account for the discrepancy. You should try increasing this
>>>> value.
>>>>
>>>> Note that linkCapacity is used to set initial credit for your links.
>>>>
>>>> -Ted
>>>>
>>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
>>>>> Hello,
>>>>> We are currently running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid Dispatch Router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
>>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a simple Java Broker. Producer throughput drops to half, and it oscillates heavily when the dispatcher is present.
>>>>>
>>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
>>>>>
>>>>> Can you please help us find the cause of this issue?
>>>>>
>>>>> Dispatch router config
>>>>> router {
>>>>> id: router.10454
>>>>> mode: interior
>>>>> worker-threads: 4
>>>>> }
>>>>>
>>>>> listener {
>>>>> host: 0.0.0.0
>>>>> port: 10454
>>>>> role: normal
>>>>> saslMechanisms: ANONYMOUS
>>>>> requireSsl: no
>>>>> authenticatePeer: no
>>>>> }
>>>>>
>>>>> Java Broker config
>>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
>>>>> 1 Topic + 1 Queue
>>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
>>>>>
>>>>> Qdmanage on Dispatcher
>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
>>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
>>>>>
>>>>> Combined producer throughput
>>>>> 1 Broker: http://hpics.li/a9d6efa
>>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
>>>>>
>>>>> Regards,
>>>>> Adel
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Hello Ted,
Increasing the link capacity had no impact. So, I have done a series of tests to try and isolate the issue.
We tested 3 different architectures without any consumers:
Producer --> Broker
Producer --> Dispatcher
Producer --> Dispatcher --> Broker
In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
Our benchmark machines have 20 cores and 396 GB of RAM each. We have currently put consumers/producers on 1 machine and dispatcher/brokers on another machine. They are connected by a 10 Gbps Ethernet link. Nothing else is using the machines.
The results are in the table below.
What I could observe:
The broker alone scales well when I add producers
The dispatcher alone scales well when I add producers
The dispatcher connected to a broker scales well with 2 producers
The dispatcher connected to a broker fails when having 3 producers or more
I also did some "qdstat -l" while the test was running and at max had 5 unsettled deliveries. So I don't think the problem comes from the linkCapacity.
What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection for each new producer, or does it use some sort of connection pool?
Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
--------------------|--------|------------|--------------------------------------|-----------------------------------
1                   | YES    | NO         |  3 500                               | 370
4                   | YES    | NO         |  9 200                               | 420
1                   | NO     | YES        |  6 000                               | 180
2                   | NO     | YES        | 12 000                               | 192
3                   | NO     | YES        | 16 000                               | 201
1                   | YES    | YES        |  2 500                               | 360
2                   | YES    | YES        |  4 800                               | 400
3                   | YES    | YES        |  5 200                               | 540
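For reference, the scaling collapse is easiest to see as per-producer throughput. A minimal Python sketch using only the numbers from the table above (the grouping names are mine, not part of the benchmark):

```python
# Combined throughput (msg/s) from the benchmark table, keyed by setup
# and number of producers.
results = {
    "broker only":         {1: 3500, 4: 9200},
    "dispatcher only":     {1: 6000, 2: 12000, 3: 16000},
    "dispatcher + broker": {1: 2500, 2: 4800, 3: 5200},
}

for setup, data in results.items():
    for producers, total in sorted(data.items()):
        # Per-producer throughput should stay roughly flat if the setup scales.
        print(f"{setup}: {producers} producer(s) -> {total / producers:.0f} msg/s each")
```

With the dispatcher and broker combined, per-producer throughput falls from 2,500 msg/s (1 producer) to about 1,733 msg/s (3 producers), while the other two setups hold roughly steady.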
qdstat -l
bash$ qdstat -b dell445srv:10254 -l
Router Links
type dir conn id id peer class addr phs cap undel unsettled deliveries admin oper
=======================================================================================================================
endpoint in 19 46 mobile perfQueue 1 250 0 0 0 enabled up
endpoint out 19 54 mobile perf.topic 0 250 0 2 4994922 enabled up
endpoint in 27 57 mobile perf.topic 0 250 0 1 1678835 enabled up
endpoint in 28 58 mobile perf.topic 0 250 0 1 1677653 enabled up
endpoint in 29 59 mobile perf.topic 0 250 0 0 1638434 enabled up
endpoint in 47 94 mobile $management 0 250 0 0 1 enabled up
endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
Regards,
Adel
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: tross@redhat.com
> Date: Tue, 26 Jul 2016 10:32:29 -0400
>
> Adel,
>
> That's a good question. I think it's highly dependent on your
> requirements and the environment. Here are some random thoughts:
>
> - There's a trade-off between memory use (message buffering) and
> throughput. If you have many clients sharing the message bus,
> smaller values of linkCapacity will protect the router memory. If
> you have relatively few clients wanting to go fast, a larger
> linkCapacity is appropriate.
> - If the underlying network has high latency (satellite links, long
> distances, etc.), larger values of linkCapacity will be needed to
> protect against stalling caused by delayed settlement.
> - The default of 250 is considered a reasonable compromise. I think a
> value around 10 is better for a shared bus, but 500-1000 might be
> better for throughput with few clients.
>
> -Ted
>
>
> On 07/26/2016 10:08 AM, Adel Boutros wrote:
> > Thanks Ted,
> >
> > I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
> >
> > Regards,
> > Adel
> >
> >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> To: users@qpid.apache.org
> >> From: tross@redhat.com
> >> Date: Tue, 26 Jul 2016 09:44:43 -0400
> >>
> >> Adel,
> >>
> >> The number of workers should be related to the number of available
> >> processor cores, not the volume of work or number of connections. 4 is
> >> probably a good number for testing.
> >>
> >> I'm not sure what the default link credit is for the Java broker (it's
> >> 500 for the c++ broker) or the clients you're using.
> >>
> >> The metric you should adjust is the linkCapacity for the listener and
> >> route-container connector. LinkCapacity is the number of deliveries
> >> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
> >> defaults linkCapacity to 250. Depending on the volumes in your test,
> >> this might account for the discrepancy. You should try increasing this
> >> value.
> >>
> >> Note that linkCapacity is used to set initial credit for your links.
> >>
> >> -Ted
> >>
> >> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> >>> Hello, we are currently running some performance benchmarks on an architecture consisting of a Java Broker connected to a Qpid Dispatch Router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
> >>> We have noticed a significant loss of performance in this architecture compared to one composed of a simple Java Broker: producer throughput drops to half, and there are a lot of oscillations in the presence of the dispatcher.
> >>>
> >>> I have tried to double the number of workers on the dispatcher but it had no impact.
> >>>
> >>> Can you please help us find the cause of this issue?
> >>>
> >>> Dispatch router config
> >>> router {
> >>> id: router.10454
> >>> mode: interior
> >>> worker-threads: 4
> >>> }
> >>>
> >>> listener {
> >>> host: 0.0.0.0
> >>> port: 10454
> >>> role: normal
> >>> saslMechanisms: ANONYMOUS
> >>> requireSsl: no
> >>> authenticatePeer: no
> >>> }
> >>>
> >>> Java Broker config
> >>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
> >>> 1 Topic + 1 Queue
> >>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
> >>>
> >>> Qdmanage on Dispatcher
> >>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
> >>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
> >>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
> >>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
> >>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
> >>>
> >>> Combined producer throughput
> >>> 1 Broker: http://hpics.li/a9d6efa
> >>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
> >>>
> >>> Regards,
> >>> Adel
> >>>
> >>>
> >>>
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
Adel,
That's a good question. I think it's highly dependent on your
requirements and the environment. Here are some random thoughts:
- There's a trade-off between memory use (message buffering) and
throughput. If you have many clients sharing the message bus,
smaller values of linkCapacity will protect the router memory. If
you have relatively few clients wanting to go fast, a larger
linkCapacity is appropriate.
- If the underlying network has high latency (satellite links, long
distances, etc.), larger values of linkCapacity will be needed to
protect against stalling caused by delayed settlement.
- The default of 250 is considered a reasonable compromise. I think a
value around 10 is better for a shared bus, but 500-1000 might be
better for throughput with few clients.
-Ted
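Applying this advice to the qdrouterd.conf from the original post, linkCapacity is set per listener and per route-container connector. A sketch (the value 1000 is illustrative, following the guidance above for a few fast clients):

```
listener {
    host: 0.0.0.0
    port: 10454
    role: normal
    linkCapacity: 1000   # default is 250; ~10 suits a shared bus, 500-1000 a few fast clients
    saslMechanisms: ANONYMOUS
    requireSsl: no
    authenticatePeer: no
}

connector {
    role: route-container
    addr: localhost
    port: 10455
    linkCapacity: 1000
    name: localhost.broker.10455.connector
}
```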
On 07/26/2016 10:08 AM, Adel Boutros wrote:
> Thanks Ted,
>
> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
>
> Regards,
> Adel
>
>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> To: users@qpid.apache.org
>> From: tross@redhat.com
>> Date: Tue, 26 Jul 2016 09:44:43 -0400
>>
>> Adel,
>>
>> The number of workers should be related to the number of available
>> processor cores, not the volume of work or number of connections. 4 is
>> probably a good number for testing.
>>
>> I'm not sure what the default link credit is for the Java broker (it's
>> 500 for the c++ broker) or the clients you're using.
>>
>> The metric you should adjust is the linkCapacity for the listener and
>> route-container connector. LinkCapacity is the number of deliveries
>> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
>> defaults linkCapacity to 250. Depending on the volumes in your test,
>> this might account for the discrepancy. You should try increasing this
>> value.
>>
>> Note that linkCapacity is used to set initial credit for your links.
>>
>> -Ted
>>
>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> >>> [snip]
RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Adel Boutros <ad...@live.com>.
Thanks Ted,
I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
Regards,
Adel
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: tross@redhat.com
> Date: Tue, 26 Jul 2016 09:44:43 -0400
>
> Adel,
>
> The number of workers should be related to the number of available
> processor cores, not the volume of work or number of connections. 4 is
> probably a good number for testing.
>
> I'm not sure what the default link credit is for the Java broker (it's
> 500 for the c++ broker) or the clients you're using.
>
> The metric you should adjust is the linkCapacity for the listener and
> route-container connector. LinkCapacity is the number of deliveries
> that can be in-flight (unsettled) on each link. Qpid Dispatch Router
> defaults linkCapacity to 250. Depending on the volumes in your test,
> this might account for the discrepancy. You should try increasing this
> value.
>
> Note that linkCapacity is used to set initial credit for your links.
>
> -Ted
>
> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> > [snip]
Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Posted by Ted Ross <tr...@redhat.com>.
Adel,
The number of workers should be related to the number of available
processor cores, not the volume of work or number of connections. 4 is
probably a good number for testing.
I'm not sure what the default link credit is for the Java broker (it's
500 for the c++ broker) or the clients you're using.
The metric you should adjust is the linkCapacity for the listener and
route-container connector. LinkCapacity is the number of deliveries
that can be in-flight (unsettled) on each link. Qpid Dispatch Router
defaults linkCapacity to 250. Depending on the volumes in your test,
this might account for the discrepancy. You should try increasing this
value.
Note that linkCapacity is used to set initial credit for your links.
-Ted
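Since linkCapacity bounds the unsettled deliveries in flight on a link, a rough sizing heuristic (my own back-of-envelope sketch, not an official formula) is message rate times the settlement round-trip time, with some headroom:

```python
def suggested_link_capacity(msgs_per_sec, rtt_seconds, safety=2.0):
    """Deliveries that can be in flight during one settlement round trip.

    A link stalls when all credit is consumed before the first settlement
    returns, so capacity should cover roughly rate * RTT (plus headroom).
    """
    return max(1, int(msgs_per_sec * rtt_seconds * safety))

# ~6,000 msg/s per producer (the dispatcher-only case) with ~1 ms settlement RTT:
print(suggested_link_capacity(6000, 0.001))   # -> 12
# The same rate over a 50 ms WAN link needs far more credit:
print(suggested_link_capacity(6000, 0.050))   # -> 600
```

This matches the intuition in the replies above: small values suffice on a fast LAN, while high-latency paths need much larger credit windows.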
On 07/25/2016 12:10 PM, Adel Boutros wrote:
> [snip]