Posted to users@qpid.apache.org by Bill Whiting <te...@bellsouth.net> on 2010/03/07 21:46:37 UTC

durable messaging

I'm trying to enable persistence.  If I define an exchange and a queue 
with the --durable option, the qpid-config command does not complain, 
but the objects are not re-created when I recycle the broker.

If I run the broker manually with
     qpidd --data-dir /var/lib/qpidd -t
then the broker notes that persistence is not enabled.  Is there a web 
page or documentation that describes enabling the store for persistent 
messaging?
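
For reference, the commands I'm using look roughly like this (the 
names and the exchange type are illustrative):

     qpid-config add exchange direct test.exchange --durable
     qpid-config add queue test.queue --durable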

//Bill

On 11/05/2009 08:31 AM, Carl Trieloff wrote:
>
>
> Mike,
>
> The issue is that on some high-core-count, multi-socket machines, a 
> few things can go wrong. It starts with some of the value-add the 
> hardware may do using a feature called SMIs (System Management 
> Interrupts). These are hardware interrupts that stop the CPUs, then 
> load some code into the CPU to do management work, ECC checks, and 
> power management (green computing, etc.). The bad side is that they 
> 'stop' all the CPUs on the machine; we have plotted SMIs of up to 
> 1-2 ms on some machines. My employer has worked with quite a few 
> hardware suppliers to certify a number of machines (removing SMIs for 
> realtime). Note that in many cases SMIs don't impact applications; 
> in Java, for example, the effects of GC are larger.
>
> A few other things go on as well. NUMA works on a per-socket basis; 
> if you run multi-socket with a high core count and the CPU load is 
> not high enough for the scheduler to keep CPU locality, then 
> expensive memory access and cache effects come into play, along with 
> less effective locking. If you are on RHEL 5.4 I can provide some 
> settings which will give you NUMA-aware memory allocation, which can 
> increase throughput by up to 75% and improve latency by about 25% on 
> NUMA machines.
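>
> As a rough sketch (the node number is illustrative), NUMA-aware 
> placement with numactl looks like this:
>
>     # bind the broker's CPUs and memory allocation to NUMA node 0
>     numactl --cpunodebind=0 --membind=0 qpidd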
>
> Thus the quick experiment of setting worker-threads equal to the 
> number of cores on one socket increases the CPU load a little for 
> those threads and lowers the probability of being scheduled off-core. 
> This then 'removes some of the hardware effects'.
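>
> For that experiment (assuming, for illustration, 4 cores per socket), 
> check the per-socket core count and size the thread pool to match:
>
>     # cores per physical socket
>     grep 'cpu cores' /proc/cpuinfo | head -1
>     qpidd --worker-threads 4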
>
> Obviously, if it is run on an SMI-free or SMI-reprofiled machine (the 
> fast ones you noted have little or no SMIs) and numactl and things 
> like cpuspeed are set, then the more powerful machine will beat the 
> slower one. But in this case the faster machine is getting in its own 
> way.
>
> Carl.
>
> Mike D.. wrote:
>> Hi,
>> This "matched flow" behavior is quite interesting; luckily we have 
>> not experienced it when prototyping on our developer machines.
>>
>> Would you mind explaining a bit, Carl, why this happens and what 
>> your suggestion is for users of Qpid? Soon we will test the proof of 
>> concept on our servers as well. How can we have Qpid utilize both 
>> CPUs (8 processors)?
>>
>>
>> thanks,
>> mike
>>
>>
>>
>>
>> Carl Trieloff wrote:
>>>
>>> I mailed you the deck directly.
>>>
>>> Carl.
>>>
>>>
>>> Andy Li wrote:
>>>> Carl,
>>>>
>>>> Yes, reducing the number of worker threads from #cores + 1 to 4 
>>>> did switch the data center machines to behavior (1). Looks like 
>>>> you've diagnosed the issue!
>>>>
>>>> Unfortunately, I couldn't find a copy of your talk at HPC anywhere 
>>>> on the web. It's listed on their website, but no files are posted.
>>>>
>>>> Thanks,
>>>> Andy
>>>>
>>>>
>>>>
>>>>
>>>>     OK, I think I know what might be going on; I believe it is
>>>>     hardware related -- take a look at the presentation I did
>>>>     with Lee Fisher at HPC on Wall Street.
>>>>
>>>>     Anyway, try the following and let's see if we can alter the
>>>>     behaviour. On the data centre machines, run qpidd with
>>>>     --worker-threads 4.
>>>>
>>>>     If that alters the results I'll expand on my theory and how to
>>>>     resolve the hardware side.
>>>>     Carl.
>>>>
>>>
>>
>
>




Re: durable messaging

Posted by Ian kinkade <ki...@idi-middleware.com>.
Hi Bill,

Looks like the persistence store is implemented using a DB: 
http://qpid.et.redhat.com/download.html

I am still looking for meaningful documentation.

Best Regards .............. Ian

-- 
Ian Kinkade
CEO
Information Design, Inc.
145 Durham Road, Suite 11
Madison, CT  06443 USA
URL:   www.idi-middleware.com
Email: kinkade@idi-middleware.com

Work:  203-245-0772 Ext: 6212
Fax:   203-245-1885
Cell:  203-589-1192






Re: durable messaging

Posted by Bill Whiting <te...@bellsouth.net>.
Kim,
Thanks, that fixed my problem.

//Bill





Re: durable messaging

Posted by Kim van der Riet <ki...@redhat.com>.

Make sure you have the store module loaded: pass --load-module
/path/to/msgstore.so when starting the broker (if it is not already set
in your config file):

$ ./qpidd --load-module /path/to/msgstore.so --auth no --log-enable notice+
2010-03-08 07:16:48 notice Journal "TplStore": Created
2010-03-08 07:16:48 notice Store module initialized; store-dir=/home/username/.qpidd
2010-03-08 07:16:48 notice SASL disabled: No Authentication Performed
2010-03-08 07:16:48 notice Listening on TCP port 5672
2010-03-08 07:16:48 notice Broker running

Note the first two lines in the log.
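
If you prefer the config file, the equivalent entries would look
something like this (the store path is illustrative; the default file
is typically /etc/qpidd.conf):

    # broker options, one per line, without the leading --
    load-module=/path/to/msgstore.so
    data-dir=/var/lib/qpidd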

http://cwiki.apache.org/confluence/display/qpid/FAQ#FAQ-Persistence

I agree that it would be helpful if the qpid-config program warned when
the store module is not loaded and you try to use persistence features.
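
Once the store is loaded, a quick way to confirm the behavior is to
create a durable queue, restart the broker, and list the queues again;
the durable queue should still be present (names are illustrative):

    qpid-config add queue test.queue --durable
    # restart the broker, then:
    qpid-config queues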



---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org