Posted to users@qpid.apache.org by smartdog <jw...@gmail.com> on 2014/06/17 20:10:52 UTC

Why performance of sending durable messages to qpid queue is really bad?

With the Proton C++ client, it seems sending a non-durable message to a qpid queue
takes 1-3 ms, while sending a durable message takes a constant 1000 ms. Is this by
design? Why does it take so much time?

My code:
pn_message_set_durable(message, true);

  for(i=0;i<10;i++){
   gettimeofday(&start, NULL);
   printf("sending %d ", i);
   pn_messenger_put(messenger, message);
   messageTracker = pn_messenger_outgoing_tracker(messenger);
   pn_messenger_send(messenger, -1);

   pn_status_t trackerStatus = pn_messenger_status(messenger,
messageTracker);
   if(trackerStatus != PN_STATUS_ACCEPTED) printf("send Azure failed! %d\n",
trackerStatus);
   else pn_messenger_settle(messenger,messageTracker,0);

   gettimeofday(&end, NULL);
   seconds  = end.tv_sec  - start.tv_sec;
   useconds = end.tv_usec - start.tv_usec;
   mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
   printf(" after send one Elapsed time: %ld milliseconds\n", mtime);
  }



--
View this message in context: http://qpid.2158936.n2.nabble.com/Why-performance-of-sending-durable-messages-to-qpid-queue-is-really-bad-tp7609368.html
Sent from the Apache Qpid users mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org


Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by smartdog <jw...@gmail.com>.
I am not able to do that. 

./qpidd --daemon  --wcache-page-size 4 --config /etc/qpid/qpidd.conf
[Broker] critical Unexpected error: Error in command line options: ambiguous
option wcache-page-size
Use --help to see valid options






Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Pavel Moravec <pm...@redhat.com>.
Durable messages will always be (much) slower to process than transient ones, as disk I/O operations are much slower than keeping the message just in memory.

I would first attempt to improve the general I/O performance of the filesystem/disk you use for journals. E.g. older versions of the ext4 filesystem provided poor performance for the unaligned direct AIO operations the store module performs (newer versions should be fine, afaik).

Apart from that, on qpidd level, you can try tuning one broker option:


  --wcache-page-size N (32)     Size of the pages in the write page cache in 
                                KiB. Allowable values - powers of 2: 1, 2, 4, 
                                ... , 128. Lower values decrease latency at the
                                expense of throughput.
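
As a sketch, the option can be set in qpidd.conf (the config path below is taken from the original post; note the option is provided by the store module, so it may be reported as ambiguous if more than one store module is loaded):

```
# /etc/qpid/qpidd.conf -- lower the write cache page size from the
# default 32 KiB to 4 KiB, trading throughput for lower latency
wcache-page-size=4
```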


Kind regards,
Pavel


----- Original Message -----
> From: "smartdog" <jw...@gmail.com>
> To: users@qpid.apache.org
> Sent: Tuesday, June 17, 2014 11:35:22 PM
> Subject: Re: Why performance of sending durable messages to qpid queue is really bad?
> 
> It is the qpid c++ broker. The BDB store is used. Is there a way to improve
> the performance while preserving the persistence?
> 
> qpidd.conf
> data-dir=/var/spool/qpid
> mgmt-enable=yes
> load-module=/usr/local/phonefactor/bin/legacystore.so
> load-module=/usr/local/phonefactor/bin/qpid/store/store.so
> 
> From logs:
> [Store] info > Default files per journal: 8
> [Store] info > Default journal file size: 24 (wpgs)
> [Store] info > Default write cache page size: 32 (KiB)
> [Store] info > Default number of write cache pages: 32
> [Store] info > TPL files per journal: 8
> [Store] info > TPL journal file size: 24 (wpgs)
> [Store] info > TPL write cache page size: 4 (KiB)
> [Store] info > TPL number of write cache pages: 64
> 


Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Alan Conway <ac...@redhat.com>.
On Fri, 2014-06-20 at 12:40 +0100, Gordon Sim wrote:
> On 06/19/2014 10:03 PM, smartdog wrote:
> > I am not sending big messages, just a couple of words. Indeed, I use AMQP
> > 1.0. Then it seems the static 1000ms latency comes from qpid timeout for
> > waiting more messages to write to the store. Can I adjust the timeout value
> > in qpid source code? I would be happy if we could reduce it to 100ms for
> > durable messages.
> 
> The timeout is 500ms by default (at least on trunk at present, and I 
> doubt it's changed very recently). To change it, edit the value of 
> MessageStoreImpl::defJournalFlushTimeout in 
> cpp/src/qpid/legacystore/MessageStoreImpl.cpp
> 
> (There is a timer task per journal, though at present configured to use 
> the same periodicity. So it would be possible without too much work to 
> make it configurable on a per queue basis if anyone had the inclination).
> 

That sounds like an unreasonably large default timeout. Even on cheap
hardware network latency is a few milliseconds and I'm guessing disk
latency is a lot less. A half-second timeout to batch messages from
network to disk sounds completely out of line. Batching for high
throughput is one thing, but imposing such a large artificial latency on
low-throughput applications doesn't make sense.

I would guess that a more appropriate timeout is in the milliseconds or
sub-millisecond range (if it is really necessary at all) but it would
require some performance measurements to determine that.

Cheers,
Alan.




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Gordon Sim <gs...@redhat.com>.
On 06/21/2014 12:09 AM, smartdog wrote:
> Thanks for that. After changing it from 500 to 10, I am able to get 20ms
> latency for a send. Pretty cool.
>
> But I cannot reproduce it on another machine, i.e. after I copied the
> rebuilt qpidd executable with reduced timeout to another machine, the
> latency is still 1000ms on that machine, unless I rebuild qpidd on that
> machine, then all qpidd executables have 20ms latency.
>
> So I guess the build process changed something only on the machine, not the
> qpidd executable, legacystore.so, etc. How to work around this because we
> don't want to build qpidd from src on every machine.

Note that the store is built as a separate library, loaded on demand by 
qpidd. So you would need to copy the fixed version of that library 
(qpidd itself isn't actually affected by the change).
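
In other words (a sketch; the destination path is an assumption based on the qpidd.conf quoted earlier in this thread, and <build-dir> stands for wherever your rebuilt module lives):

```
# Copy only the rebuilt store plugin to the target machine;
# qpidd itself is unchanged by the timeout edit.
scp <build-dir>/legacystore.so otherhost:/usr/local/phonefactor/bin/legacystore.so
```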




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by smartdog <jw...@gmail.com>.
Thanks for that. After changing it from 500 to 10, I am able to get 20ms
latency for a send. Pretty cool.

But I cannot reproduce it on another machine: after I copied the
rebuilt qpidd executable with the reduced timeout to another machine, the
latency there is still 1000ms, unless I rebuild qpidd on that machine,
after which all qpidd executables have 20ms latency.

So I guess the build process changed something on the machine itself, not just
the qpidd executable, legacystore.so, etc. How can we work around this? We
don't want to build qpidd from source on every machine.





Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Gordon Sim <gs...@redhat.com>.
On 06/19/2014 10:03 PM, smartdog wrote:
> I am not sending big messages, just a couple of words. Indeed, I use AMQP
> 1.0. Then it seems the static 1000ms latency comes from qpid timeout for
> waiting more messages to write to the store. Can I adjust the timeout value
> in qpid source code? I would be happy if we could reduce it to 100ms for
> durable messages.

The timeout is 500ms by default (at least on trunk at present, and I 
doubt it's changed very recently). To change it, edit the value of 
MessageStoreImpl::defJournalFlushTimeout in 
cpp/src/qpid/legacystore/MessageStoreImpl.cpp

(There is a timer task per journal, though at present configured to use 
the same periodicity. So it would be possible without too much work to 
make it configurable on a per queue basis if anyone had the inclination).
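
For illustration, the change amounts to a one-line edit to that default. This is a sketch only: the exact type and declaration are assumptions, and only the constant's name and file come from the message above:

```cpp
// cpp/src/qpid/legacystore/MessageStoreImpl.cpp (sketch; type assumed)
// was: ... defJournalFlushTimeout = 500; // ms
const uint32_t MessageStoreImpl::defJournalFlushTimeout = 10; // ms
```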




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by smartdog <jw...@gmail.com>.
I am not sending big messages, just a couple of words. Indeed, I use AMQP
1.0. So it seems the constant 1000ms latency comes from a qpid-side timeout,
waiting for more messages to write to the store. Can I adjust the timeout value
in the qpid source code? I would be happy if we could reduce it to 100ms for
durable messages. We do not want to send messages in batches at the moment.





Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Gordon Sim <gs...@redhat.com>.
On 06/18/2014 06:39 PM, Fraser Adams wrote:
> On 18/06/14 09:13, Gordon Sim wrote:
>> On 06/17/2014 10:35 PM, smartdog wrote:
>>> It is the qpid c++ broker. The BDB store is used. Is there a way to
>>> improve
>>> the performance while preserving the persistence?
>>
>> Can you send messages in batches? I.e. don't make every send synchronous.
>>
>> A synchronous send will always be a little slower and for the c++
>> broker if the message is durable there is an added penalty at present
>> over AMQP 1.0 as the broker doesn't flush the disk immediately (it
>> doesn't get any explicit indication that the client is not going to
>> send anything else until it gets back the disposition for the message).
>>
>>
> Pavel & Gordon, smartdog's original comment said "sending an undurable
> message to a qpid queue takes 1-3ms, while sending a durable message
> takes static 1000ms". Taking 1 second does seem awfully high, even given
> the synchronous and flush comments. The message size could clearly be
> significant, but I'm guessing that if he were sending *really big*
> messages he'd have said so, so it's probably not a factor here; the
> flush delay seems plausible.
>
> What determines when the flush occurs?

Over 0-10, the client can explicitly trigger the flush by sending a sync 
control (or setting the sync flag on the transfer). There is no such 
explicit mechanism in 1.0, so at present the flush happens after a short 
timeout on the broker side.

This was discussed in a thread started by Pavel some weeks ago. The 
broker would need to be altered to allow it to recognise that a given 
message is not part of a batch (e.g. because there is no further data 
and no further messages for that queue in the data received). This isn't 
a trivial change unfortunately.




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
On 18/06/14 09:13, Gordon Sim wrote:
> On 06/17/2014 10:35 PM, smartdog wrote:
>> It is the qpid c++ broker. The BDB store is used. Is there a way to 
>> improve
>> the performance while preserving the persistence?
>
> Can you send messages in batches? I.e. don't make every send synchronous.
>
> A synchronous send will always be a little slower and for the c++ 
> broker if the message is durable there is an added penalty at present 
> over AMQP 1.0 as the broker doesn't flush the disk immediately (it 
> doesn't get any explicit indication that the client is not going to 
> send anything else until it gets back the disposition for the message).
>
>
Pavel & Gordon, smartdog's original comment said "sending an undurable 
message to a qpid queue takes 1-3ms, while sending a durable message 
takes static 1000ms". Taking 1 second does seem awfully high, even given 
the synchronous and flush comments. The message size could clearly be 
significant, but I'm guessing that if he were sending *really big* 
messages he'd have said so, so it's probably not a factor here; the 
flush delay seems plausible.

What determines when the flush occurs?


smartdog, as a thought, one thing you could try is sending a message with 
the qpid spout program. spout uses qpid::messaging, so it would rule out 
any quirks in the interactions with Messenger. You can also set it to use 
either AMQP 0.10 or AMQP 1.0, so you should be able to check whether the 
AMQP 1.0 flush behaviour is the most likely issue: if it is fast when set 
to AMQP 0.10 and slower when set to AMQP 1.0, that would *probably* point 
to the flush as the culprit.


spout lives in <qpid>/cpp/<bld>/examples/messaging
where <qpid> is the root of the source tree and <bld> is your cmake 
build directory.


The basic syntax to send a message over AMQP 1.0 is:

./spout --connection-options "{protocol:amqp1.0}" -b localhost --content "Hello World" "<destination-node>"

where <destination-node> is the name of the queue/exchange you want to 
send the message to. For AMQP 0.10 it's just:

./spout -b localhost --content "Hello World" "<destination-node>"

To make the message durable, I think you simply use the -d option, e.g.

./spout --connection-options "{protocol:amqp1.0}" -d -b localhost --content "Hello World" "<destination-node>"

or:

./spout -d -b localhost --content "Hello World" "<destination-node>"

Regards,
Frase






Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Gordon Sim <gs...@redhat.com>.
On 06/17/2014 10:35 PM, smartdog wrote:
> It is the qpid c++ broker. The BDB store is used. Is there a way to improve
> the performance while preserving the persistence?

Can you send messages in batches? I.e. don't make every send synchronous.

A synchronous send will always be a little slower and for the c++ broker 
if the message is durable there is an added penalty at present over AMQP 
1.0 as the broker doesn't flush the disk immediately (it doesn't get any 
explicit indication that the client is not going to send anything else 
until it gets back the disposition for the message).
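
As a sketch of what Gordon suggests, the original loop could queue all ten messages before making a single synchronous send (this reuses the Messenger calls from the original post; untested, error handling omitted):

```c
/* pn_messenger_put() only queues the message locally; no network I/O yet. */
for (i = 0; i < 10; i++) {
    pn_messenger_put(messenger, message);
}
/* One synchronous send for the whole batch (-1 = all outgoing messages),
 * so any broker-side flush delay is paid once rather than per message. */
pn_messenger_send(messenger, -1);
```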




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by smartdog <jw...@gmail.com>.
This probably does not matter. I think legacystore.so is the one that is
working.





Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Kim van der Riet <ki...@redhat.com>.
On Tue, 2014-06-17 at 14:35 -0700, smartdog wrote:
> load-module=/usr/local/phonefactor/bin/legacystore.so
> load-module=/usr/local/phonefactor/bin/qpid/store/store.so

Do you have *two* stores loaded at the same time?




Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by smartdog <jw...@gmail.com>.
It is the qpid c++ broker. The BDB store is used. Is there a way to improve
the performance while preserving the persistence?

qpidd.conf
data-dir=/var/spool/qpid
mgmt-enable=yes
load-module=/usr/local/phonefactor/bin/legacystore.so
load-module=/usr/local/phonefactor/bin/qpid/store/store.so

From logs:
[Store] info > Default files per journal: 8
[Store] info > Default journal file size: 24 (wpgs)
[Store] info > Default write cache page size: 32 (KiB)
[Store] info > Default number of write cache pages: 32
[Store] info > TPL files per journal: 8
[Store] info > TPL journal file size: 24 (wpgs)
[Store] info > TPL write cache page size: 4 (KiB)
[Store] info > TPL number of write cache pages: 64








Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Ted Ross <tr...@redhat.com>.
I don't think the client library (Proton) has anything to do with this
latency disparity.  It is simply waiting for settlement from the broker
because of the synchronous send.

What kind of broker are you using and how is the message store on it
configured?

-Ted

On 06/17/2014 02:10 PM, smartdog wrote:
> With Proton c++ client, it seems sending an undurable message to a qpid queue
> takes 1-3ms, while sending a durable message takes static 1000ms. Is it by
> design? Why does it take so much time?
> 
> My code:
> pn_message_set_durable(message, true);
> 
>   for(i=0;i<10;i++){
>    gettimeofday(&start, NULL);
>    printf("sending %d ", i);
>    pn_messenger_put(messenger, message);
>    messageTracker = pn_messenger_outgoing_tracker(messenger);
>    pn_messenger_send(messenger, -1);
> 
>    pn_status_t trackerStatus = pn_messenger_status(messenger,
> messageTracker);
>    if(trackerStatus != PN_STATUS_ACCEPTED) printf("send Azure failed! %d\n",
> trackerStatus);
>    else pn_messenger_settle(messenger,messageTracker,0);
> 
>    gettimeofday(&end, NULL);
>    seconds  = end.tv_sec  - start.tv_sec;
>    useconds = end.tv_usec - start.tv_usec;
>    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
>    printf(" after send one Elapsed time: %ld milliseconds\n", mtime);
>   }
> 
> 
> 


Re: Why performance of sending durable messages to qpid queue is really bad?

Posted by Chuck Rolke <cr...@redhat.com>.
Also, in passing: the original code doesn't calculate the time delta correctly.

>    seconds  = end.tv_sec  - start.tv_sec;
>    useconds = end.tv_usec - start.tv_usec;
>    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;

This fails to account for the borrow from microseconds into seconds, so you will regularly get odd results.
Use timersub() to calculate the delta; see man timeradd for details.

----- Original Message -----
> From: "smartdog" <jw...@gmail.com>
> To: users@qpid.apache.org
> Sent: Tuesday, June 17, 2014 2:10:52 PM
> Subject: Why performance of sending durable messages to qpid queue is really bad?
> 
> With Proton c++ client, it seems sending an undurable message to a qpid queue
> takes 1-3ms, while sending a durable message takes static 1000ms. Is it by
> design? Why does it take so much time?
> 
> My code:
> pn_message_set_durable(message, true);
> 
>   for(i=0;i<10;i++){
>    gettimeofday(&start, NULL);
>    printf("sending %d ", i);
>    pn_messenger_put(messenger, message);
>    messageTracker = pn_messenger_outgoing_tracker(messenger);
>    pn_messenger_send(messenger, -1);
> 
>    pn_status_t trackerStatus = pn_messenger_status(messenger,
> messageTracker);
>    if(trackerStatus != PN_STATUS_ACCEPTED) printf("send Azure failed! %d\n",
> trackerStatus);
>    else pn_messenger_settle(messenger,messageTracker,0);
> 
>    gettimeofday(&end, NULL);
>    seconds  = end.tv_sec  - start.tv_sec;
>    useconds = end.tv_usec - start.tv_usec;
>    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
>    printf(" after send one Elapsed time: %ld milliseconds\n", mtime);
>   }
> 
> 
> 