Posted to users@qpid.apache.org by "GS.Chandra N" <gs...@gmail.com> on 2009/03/08 06:42:12 UTC
Memory pile up on broker
Hi,
I'm stress-testing the broker to evaluate subscription routing performance,
and I see that memory on the server goes up really high (1 GB RSS, almost
the same virtual size).
However, I'm not able to attribute the reason behind this pile-up. I cannot
see the pile-up attributed to any queues or connections on the server.
I do not create any queues from the server process to send messages; I
simply call the Python API session.message_transfer to send my messages at a
rate of 3000-odd messages per second (500-byte payload).
How do I figure out what is happening?
This is my test setup:
1. A two-CPU Xeon server running the broker
2. Another server running multiple Python scripts (copies of the same
script) that publish messages to the broker
3. Another server running a few clients that create a total of 20,000-odd
subscriptions at the broker. Only 3,000 of these subscriptions are
unique; the rest are copies.
The client subscriptions never match the messages sent in (to avoid
bandwidth choking), and therefore the queues created show 0 messages enqueued.
Thanks
gs
Re: Memory pile up on broker
Posted by Ted Ross <tr...@redhat.com>.
GS.Chandra N wrote:
> Hi,
>
> I found that if I kept qpid-tool open long enough, all my clients and
> publishers would throw exceptions and exit.
>
qpid-tool keeps an in-memory record of all management objects, including
deleted objects, that it has seen while running. If the broker is
creating and deleting queues, bindings, exchanges, etc. rapidly, the
memory used by qpid-tool will grow.
However, I see no reason why qpid-tool would affect other clients. It
simply creates its own subscription on the broker and binds private
queues to the qpid.management and amq.direct exchanges.
> Earlier, when I was performing my tests, I had three PuTTY windows open to
> the three servers, and it seems as if I was keeping qpid-tool open just
> long enough to cause a build-up, but closing it to run top and so on, such
> that the memory piled up but did not cause timeouts at the clients.
>
> Observations
>
> 1. When there are subscriptions and a qpid-tool around, messages seem to
> pile up. Why is this so?
>
qpid-tool should have no effect on the queues owned by other clients.
If a subscription exists (i.e. a binding to an exchange that causes
messages to be enqueued), those messages will accumulate unless they are
received and acknowledged by a consumer. If there is no binding that
matches incoming messages, those messages will be discarded immediately
upon reception by the broker.
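That enqueue-or-drop decision can be sketched as a toy model (plain Python, not the broker's actual code; the queue names and header values below are made up for illustration):

```python
def route(headers, bindings):
    """Toy model of the behaviour described above: the exchange routes a
    copy of the message to every queue whose binding matches its headers,
    and drops the message outright if no binding matches.
    `bindings` maps queue name -> predicate over the message headers.
    Returns the list of queues the message was routed to
    (an empty list means the message was dropped)."""
    return [queue for queue, matches in bindings.items() if matches(headers)]

# Hypothetical bindings resembling the test's header subscriptions.
bindings = {
    "dog-feed": lambda h: h.get("SPECIES") == "DOG23",
    "cat-feed": lambda h: h.get("SPECIES") == "CAT7",
}

route({"TYPE": "ANIMAL", "SPECIES": "DOG23"}, bindings)  # -> ["dog-feed"]
route({"TYPE": "ANIMAL", "SPECIES": "ALIEN"}, bindings)  # -> [] (dropped)
```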
>
> 2. When there are no subscribers and no qpid-tool, the publishing processes
> cause the CPU to go to 90% on the publishing box. But when I started up
> subscribers, the publishing box CPU fell to 0% and all the Python processes
> piled up memory.
>
> Concern: I'm not able to find out here whether the publishers are still
> able to send out messages at the speed they should when subscriptions are
> around, or whether the broker is not able to pull enough messages. I am
> not sure which of these is happening. Probably the latter? How do I tell?
>
> If I use qpid-tool to connect to the broker at this point, every time I hit
> "show exchange" I get the same stats with the same timestamp. It looks like
> the broker is too busy trying to match subscriptions even to refresh the
> stats.
>
> 3. Concern: Once the subscriptions are made, the CPU on the broker box (a
> high-end dual-core Xeon with HT) goes to 90%. Even at this level, I'm not
> sure the broker is able to match and discard all the messages fast enough
> (due to the 2nd observation). How do I tell?
>
If there are subscriptions (bindings with keys that match incoming
messages), the broker does not "match and discard". It will match and
enqueue.
>
> Thanks
> gs
>
> ps: Is there a sure-shot way to find out the message rates? (I currently
> use the qpid-tool "show exchange" command to find the number of msgReceives
> and divide by the time difference between two runs of the command.)
>
There's a CLI utility called "qpid-queue-stats" that does this for you.
It watches queue stats and displays the enqueue and dequeue message rates.
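If qpid-queue-stats isn't available, the manual approach gs describes (sampling a counter twice and dividing by the interval) can be sketched like this; `sample_stat` is a hypothetical callable standing in for however you read a counter such as msgReceives:

```python
import time

def estimate_rate(sample_stat, interval=5.0):
    """Sample a monotonically increasing counter (e.g. an exchange's
    msgReceives as shown by qpid-tool) twice, `interval` seconds apart,
    and return the average rate over that window."""
    start_value = sample_stat()
    start_time = time.time()
    time.sleep(interval)
    return (sample_stat() - start_value) / (time.time() - start_time)
```

Dividing by the measured elapsed time, rather than the nominal interval, avoids overstating the rate when the sleep overshoots.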
> On Thu, Mar 12, 2009 at 3:11 PM, GS.Chandra N <gs...@gmail.com>wrote:
>
>
>> I'm able to reproduce the issue when I keep qpid-tool open all the time
>> with the mgmt switches.
>>
>> However, I'm not able to do so if qpid-tool is not open (I did not try
>> with qpid-tool and no mgmt switches).
>>
>> Thanks
>> gs
>>
>>
>> On Thu, Mar 12, 2009 at 1:31 AM, Gordon Sim <gs...@redhat.com> wrote:
>>
>>
>>> GS.Chandra N wrote:
>>>
>>>
>>>> ps : Please find attached the scripts that create the issue
>>>>
>>>>
>>> I'm afraid I wasn't able to recreate the issue with your tests. Memory
>>> stayed reasonable and constant as the test was running (5 servers, 5
>>> clients).
>>>
>>> I'm baffled at this point as to what it is you are seeing.
>>>
>>> broker - runs on a dedicated box
>>>
>>>> qpidd -d --port=5672 --mgmt-enable=yes --mgmt-pub-interval=1 --auth=no
>>>> --log-source=yes --log-to-file=/tmp/somename
>>>>
>>>>
>>> Do you have qpid-tool or anything else running during the test? Does
>>> turning management off have any impact on the issue?
>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> Apache Qpid - AMQP Messaging Implementation
>>> Project: http://qpid.apache.org
>>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>>
>>>
>>>
>
>
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
Hi,
I found that if I kept qpid-tool open long enough, all my clients and
publishers would throw exceptions and exit.
Earlier, when I was performing my tests, I had three PuTTY windows open to
the three servers, and it seems as if I was keeping qpid-tool open just long
enough to cause a build-up, but closing it to run top and so on, such
that the memory piled up but did not cause timeouts at the clients.
Observations
1. When there are subscriptions and a qpid-tool around, messages seem to
pile up. Why is this so?
2. When there are no subscribers and no qpid-tool, the publishing processes
cause the CPU to go to 90% on the publishing box. But when I started up
subscribers, the publishing box CPU fell to 0% and all the Python processes
piled up memory.
Concern: I'm not able to find out here whether the publishers are still able
to send out messages at the speed they should when subscriptions are around,
or whether the broker is not able to pull enough messages. I am not sure
which of these is happening. Probably the latter? How do I tell?
If I use qpid-tool to connect to the broker at this point, every time I hit
"show exchange" I get the same stats with the same timestamp. It looks like
the broker is too busy trying to match subscriptions even to refresh the
stats.
3. Concern: Once the subscriptions are made, the CPU on the broker box (a
high-end dual-core Xeon with HT) goes to 90%. Even at this level, I'm not
sure the broker is able to match and discard all the messages fast enough
(due to the 2nd observation). How do I tell?
Thanks
gs
ps: Is there a sure-shot way to find out the message rates? (I currently
use the qpid-tool "show exchange" command to find the number of msgReceives
and divide by the time difference between two runs of the command.)
On Thu, Mar 12, 2009 at 3:11 PM, GS.Chandra N <gs...@gmail.com>wrote:
> I'm able to reproduce the issue when I keep qpid-tool open all the time
> with the mgmt switches.
>
> However, I'm not able to do so if qpid-tool is not open (I did not try
> with qpid-tool and no mgmt switches).
>
> Thanks
> gs
>
>
> On Thu, Mar 12, 2009 at 1:31 AM, Gordon Sim <gs...@redhat.com> wrote:
>
>> GS.Chandra N wrote:
>>
>>> ps : Please find attached the scripts that create the issue
>>>
>>
>> I'm afraid I wasn't able to recreate the issue with your tests. Memory
>> stayed reasonable and constant as the test was running (5 servers, 5
>> clients).
>>
>> I'm baffled at this point as to what it is you are seeing.
>>
>> broker - runs on a dedicated box
>>> qpidd -d --port=5672 --mgmt-enable=yes --mgmt-pub-interval=1 --auth=no
>>> --log-source=yes --log-to-file=/tmp/somename
>>>
>>
>> Do you have qpid-tool or anything else running during the test? Does
>> turning management off have any impact on the issue?
>>
>>
>>
>>
>>
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
> I'm able to reproduce the issue when I keep qpid-tool open all the time
> with the mgmt switches.
>
> However, I'm not able to do so if qpid-tool is not open (I did not try
> with qpid-tool and no mgmt switches).
I suspect that if you wish to have qpid-tool open, a higher
mgmt-pub-interval might well reduce the cost of doing so.
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
I'm able to reproduce the issue when I keep qpid-tool open all the time
with the mgmt switches.
However, I'm not able to do so if qpid-tool is not open (I did not try with
qpid-tool and no mgmt switches).
Thanks
gs
On Thu, Mar 12, 2009 at 1:31 AM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> ps : Please find attached the scripts that create the issue
>>
>
> I'm afraid I wasn't able to recreate the issue with your tests. Memory
> stayed reasonable and constant as the test was running (5 servers, 5
> clients).
>
> I'm baffled at this point as to what it is you are seeing.
>
> broker - runs on a dedicated box
>> qpidd -d --port=5672 --mgmt-enable=yes --mgmt-pub-interval=1 --auth=no
>> --log-source=yes --log-to-file=/tmp/somename
>>
>
> Do you have qpid-tool or anything else running during the test? Does
> turning management off have any impact on the issue?
>
>
>
>
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
> ps : Please find attached the scripts that create the issue
I'm afraid I wasn't able to recreate the issue with your tests. Memory
stayed reasonable and constant as the test was running (5 servers, 5
clients).
I'm baffled at this point as to what it is you are seeing.
> broker - runs on a dedicated box
> qpidd -d --port=5672 --mgmt-enable=yes --mgmt-pub-interval=1 --auth=no
> --log-source=yes --log-to-file=/tmp/somename
Do you have qpid-tool or anything else running during the test? Does
turning management off have any impact on the issue?
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
> On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
> Returning to the original problem, when the test is in progress, how many
> queues does 'list queue' show you? Are there any unexpected queues in that
> list?
Gordon,
How do I check whether this caching is because the broker/client is not able
to process the messages fast enough? (See machine configs below.)
Thanks for all the help.
gs
ps : Please find attached the scripts that create the issue
client - run a few instances of these from 1 box - I ran 7 of these - each
creates 3K subscriptions
./multiple-small-subc-client.py <qpid-machine-ip> 5672 &
broker - runs on a dedicated box
qpidd -d --port=5672 --mgmt-enable=yes --mgmt-pub-interval=1 --auth=no
--log-source=yes --log-to-file=/tmp/somename
server - run this a few times (I ran 7 of these) on a 3rd box
./exchange_process.py <qpid-machine-ip> 5672 &
Machine configurations from /proc/cpuinfo:
Broker - 2333 MHz Xeon dual-core with HT enabled (6 MB cache per processor),
8 GB RAM + 16 GB swap
Server - same
Client - 3.20 GHz Xeon single processor with HT, 1 MB cache per processor,
2 GB RAM + 4 GB swap
On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> I see zero messages enqueued on my server-side queue, though the
>> corresponding binding shows multiple matches. See the qpid-tool output
>> below.
>>
>
> Thanks for that output which clearly shows a bug (Jira raised [1]).
>
> Returning to the original problem, when the test is in progress, how many
> queues does 'list queue' show you? Are there any unexpected queues in that
> list?
>
> [1] https://issues.apache.org/jira/browse/QPID-1723
>
>
>
>
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
On Tue, Mar 10, 2009 at 5:36 PM, Ted Ross <tr...@redhat.com> wrote:
>
> One thing that jumps out at me when looking at your qpid-tool output is the
> extremely high number of bindings on your queue and exchange (22K bindings
> on the exchange, 3K bindings on one queue). This accounts for the high
> ratio of msgRoutes to msgReceives on the exchange.
>
> Is your test environment repeatedly creating bindings without cleaning them
> up?
>
> -Ted
>
>
I have a single server running on the default port on a single box, and I
stop the server every time I run my tests (qpidd -q) and ensure all my
current client scripts throw an exception and exit before re-running the
tests.
The queues are created with a unique id appended to their names, so they
can't be the same each time either.
As for the high subscription numbers - they are intended to be so. The
clients are designed to create 3K bindings, and my tests are centered around
performance in the heavy-subscription model.
Next up for me would be to increase the number of clients, distribute these
22K bindings across all of them (500-600), have the data flowing to all
those clients, and see how the performance is. I did not get to that stage
before my tests were blocked due to the memory issue I have reported.
Thanks
gs
ps: The CPU used to be a problem at 90% earlier during the tests, when all
22K subscriptions were totally dissimilar. But the CPU has gone down to 75%
since I changed my tests so that only 3K of them are unique and the rest are
copies, which is fine by me because my probable deployment mimics the same
situation. I'm hoping the 75% CPU is due to the memory pile-up, which is
obviously a bug or a misconfiguration.
Re: Memory pile up on broker
Posted by Ted Ross <tr...@redhat.com>.
GS.Chandra N wrote:
>> On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
>> Returning to the original problem, when the test is in progress, how many
>>
> queues does 'list queue' show you? Are there any unexpected queues in that
> list?
>
> Thanks for the replies, Gordon. Here is the information you requested.
>
> My publisher does not create any queues, and as expected I do not see any
> queues.
>
> My client creates a server queue (named pyclient-feeds-queue-<uuid>) and a
> local queue (named local_feeds-<uuid>). I see all the server queues
> (recognized from the name). The number of server queues is also correct.
>
> In the brief time it took to rerun the tests, my broker has gone from
> startup to 900 MB. (3000-odd messages per second / 500-byte message size)
>
> Thanks
> gs
>
> qpid: list queue
> Objects of type org.apache.qpid.broker:queue
> ID Created Destroyed Index
>
> =========================================================================================
> 129 18:23:00 -
> 103.pyclient-feeds-queue4d58340c-be70-b547-9083-380419a62374
> 921 18:23:02 -
> 103.pyclient-feeds-queuea2f411cd-62f5-cf45-81b7-843b2d191ffb
> 1713 18:23:04 -
> 103.pyclient-feeds-queuef67ce21b-0b7a-754f-8737-485c718e9636
> 2429 18:23:05 -
> 103.pyclient-feeds-queue8aa3f2c0-b6eb-094b-a190-0612424d85d6
> 3138 18:23:06 -
> 103.pyclient-feeds-queue97429810-49c5-dd42-83c4-59f9ba8aea2d
> 3818 18:23:07 -
> 103.pyclient-feeds-queue924172a0-2991-584e-b0d3-db977dae2067
> 4792 18:23:09 -
> 103.pyclient-feeds-queue89c0d1bc-d1b0-5c43-a213-a2efea82e9fa
>
> qpid: show 129
> Object of type org.apache.qpid.broker:queue: (last sample time: 18:23:56)
> Type Element 129
>
> ============================================================================================
> property vhostRef 103
> property name
> pyclient-feeds-queue4d58340c-be70-b547-9083-380419a62374
> property durable False
> property autoDelete False
> property exclusive True
> property arguments {}
> statistic msgTotalEnqueues 0 messages
> statistic msgTotalDequeues 0
> statistic msgTxnEnqueues 0
> statistic msgTxnDequeues 0
> statistic msgPersistEnqueues 0
> statistic msgPersistDequeues 0
> statistic msgDepth 0
> statistic byteDepth 0 octets
> statistic byteTotalEnqueues 0
> statistic byteTotalDequeues 0
> statistic byteTxnEnqueues 0
> statistic byteTxnDequeues 0
> statistic bytePersistEnqueues 0
> statistic bytePersistDequeues 0
> statistic consumerCount 1 consumer
> statistic consumerCountHigh 1
> statistic consumerCountLow 1
> statistic bindingCount 3181 bindings
> statistic bindingCountHigh 3181
> statistic bindingCountLow 3181
> statistic unackedMessages 0 messages
> statistic unackedMessagesHigh 0
> statistic unackedMessagesLow 0
> statistic messageLatencySamples 0
> statistic messageLatencyMin 0
> statistic messageLatencyMax 0
> statistic messageLatencyAverage 0
>
> qpid: show 110
> Object of type org.apache.qpid.broker:exchange: (last sample time: 18:30:01)
> Type Element 110
> ============================================
> property vhostRef 103
> property name Feeds
> property type headers
> property durable False
> property arguments {}
> statistic producerCount 0
> statistic producerCountHigh 0
> statistic producerCountLow 0
> statistic bindingCount 22260
> statistic bindingCountHigh 22260
> statistic bindingCountLow 22260
> statistic msgReceives 75920
> statistic msgDrops 32152
> statistic msgRoutes 703583062
> statistic byteReceives 36252990
> statistic byteDrops 14725616
> statistic byteRoutes 321308904240
>
>
> On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
>
>
>> GS.Chandra N wrote:
>>
>>
>>> I see zero messages enqueued on my server-side queue, though the
>>> corresponding binding shows multiple matches. See the qpid-tool output
>>> below.
>>>
>>>
>> Thanks for that output which clearly shows a bug (Jira raised [1]).
>>
>> Returning to the original problem, when the test is in progress, how many
>> queues does 'list queue' show you? Are there any unexpected queues in that
>> list?
>>
>> [1] https://issues.apache.org/jira/browse/QPID-1723
>>
>>
>>
>>
>>
>
>
One thing that jumps out at me when looking at your qpid-tool output is
the extremely high number of bindings on your queue and exchange (22K
bindings on the exchange, 3K bindings on one queue). This accounts for
the high ratio of msgRoutes to msgReceives on the exchange.
Is your test environment repeatedly creating bindings without cleaning
them up?
-Ted
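Ted's point about the route/receive ratio can be sanity-checked with arithmetic on the exchange stats quoted above. Assuming msgRoutes counts one increment per binding a message matches (an interpretation of the stat, not confirmed here) and that dropped messages matched nothing:

```python
# Exchange stats from the qpid-tool output quoted above.
msg_receives = 75920
msg_drops = 32152
msg_routes = 703583062

# Messages that matched at least one binding.
delivered = msg_receives - msg_drops

# Average number of bindings each delivered message matched.
avg_matches = msg_routes / delivered
print(f"{avg_matches:.0f} matches per delivered message")  # on the order of 16,000
```

With 22K bindings on the exchange and many of them copies of each other, an average of thousands of matches per message is consistent with Ted's observation.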
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
> On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
> Returning to the original problem, when the test is in progress, how many
> queues does 'list queue' show you? Are there any unexpected queues in that
> list?
Thanks for the replies, Gordon. Here is the information you requested.
My publisher does not create any queues, and as expected I do not see any
queues.
My client creates a server queue (named pyclient-feeds-queue-<uuid>) and a
local queue (named local_feeds-<uuid>). I see all the server queues
(recognized from the name). The number of server queues is also correct.
In the brief time it took to rerun the tests, my broker has gone from
startup to 900 MB. (3000-odd messages per second / 500-byte message size)
Thanks
gs
qpid: list queue
Objects of type org.apache.qpid.broker:queue
ID Created Destroyed Index
=========================================================================================
129 18:23:00 -
103.pyclient-feeds-queue4d58340c-be70-b547-9083-380419a62374
921 18:23:02 -
103.pyclient-feeds-queuea2f411cd-62f5-cf45-81b7-843b2d191ffb
1713 18:23:04 -
103.pyclient-feeds-queuef67ce21b-0b7a-754f-8737-485c718e9636
2429 18:23:05 -
103.pyclient-feeds-queue8aa3f2c0-b6eb-094b-a190-0612424d85d6
3138 18:23:06 -
103.pyclient-feeds-queue97429810-49c5-dd42-83c4-59f9ba8aea2d
3818 18:23:07 -
103.pyclient-feeds-queue924172a0-2991-584e-b0d3-db977dae2067
4792 18:23:09 -
103.pyclient-feeds-queue89c0d1bc-d1b0-5c43-a213-a2efea82e9fa
qpid: show 129
Object of type org.apache.qpid.broker:queue: (last sample time: 18:23:56)
Type Element 129
============================================================================================
property vhostRef 103
property name
pyclient-feeds-queue4d58340c-be70-b547-9083-380419a62374
property durable False
property autoDelete False
property exclusive True
property arguments {}
statistic msgTotalEnqueues 0 messages
statistic msgTotalDequeues 0
statistic msgTxnEnqueues 0
statistic msgTxnDequeues 0
statistic msgPersistEnqueues 0
statistic msgPersistDequeues 0
statistic msgDepth 0
statistic byteDepth 0 octets
statistic byteTotalEnqueues 0
statistic byteTotalDequeues 0
statistic byteTxnEnqueues 0
statistic byteTxnDequeues 0
statistic bytePersistEnqueues 0
statistic bytePersistDequeues 0
statistic consumerCount 1 consumer
statistic consumerCountHigh 1
statistic consumerCountLow 1
statistic bindingCount 3181 bindings
statistic bindingCountHigh 3181
statistic bindingCountLow 3181
statistic unackedMessages 0 messages
statistic unackedMessagesHigh 0
statistic unackedMessagesLow 0
statistic messageLatencySamples 0
statistic messageLatencyMin 0
statistic messageLatencyMax 0
statistic messageLatencyAverage 0
qpid: show 110
Object of type org.apache.qpid.broker:exchange: (last sample time: 18:30:01)
Type Element 110
============================================
property vhostRef 103
property name Feeds
property type headers
property durable False
property arguments {}
statistic producerCount 0
statistic producerCountHigh 0
statistic producerCountLow 0
statistic bindingCount 22260
statistic bindingCountHigh 22260
statistic bindingCountLow 22260
statistic msgReceives 75920
statistic msgDrops 32152
statistic msgRoutes 703583062
statistic byteReceives 36252990
statistic byteDrops 14725616
statistic byteRoutes 321308904240
On Tue, Mar 10, 2009 at 3:37 PM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> I see zero messages enqueued on my server-side queue, though the
>> corresponding binding shows multiple matches. See the qpid-tool output
>> below.
>>
>
> Thanks for that output which clearly shows a bug (Jira raised [1]).
>
> Returning to the original problem, when the test is in progress, how many
> queues does 'list queue' show you? Are there any unexpected queues in that
> list?
>
> [1] https://issues.apache.org/jira/browse/QPID-1723
>
>
>
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
> I see zero messages enqueued on my server-side queue, though the
> corresponding binding shows multiple matches. See the qpid-tool output
> below.
Thanks for that output which clearly shows a bug (Jira raised [1]).
Returning to the original problem, when the test is in progress, how
many queues does 'list queue' show you? Are there any unexpected queues
in that list?
[1] https://issues.apache.org/jira/browse/QPID-1723
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
On Tue, Mar 10, 2009 at 2:22 PM, Gordon Sim <gs...@redhat.com> wrote:
> For those bindings which show a non-zero msgMatched, the queue for that
> binding must show a non-zero number of messages enqueued. So for those
> bindings with a large msgMatched, see what stats are shown for the
> corresponding queue.
Gordon,
I see zero messages enqueued on my server-side queue, though the
corresponding binding shows multiple matches. See the qpid-tool output
below.
Thanks
gs
qpid: show 8386
Object of type org.apache.qpid.broker:binding: (last sample time: 13:55:35)
Type Element 8386
===================================================================================
property exchangeRef 110
property queueRef 4338
property bindingKey
property arguments {u'SPECIES': 'DOG23', u'TYPE': 'ANIMAL',
u'x-match': 'all'}
property origin <NULL>
statistic msgMatched 4193496
qpid: show 4338
Object of type org.apache.qpid.broker:queue: (last sample time: 00:20:06)
Type Element 4338
============================================================================================
property vhostRef 103
property name
pyclient-feeds-queuec9a401f7-413c-ab48-b955-5ca55bcdd7c6
property durable False
property autoDelete False
property exclusive True
property arguments {}
statistic msgTotalEnqueues 0 messages
statistic msgTotalDequeues 0
statistic msgTxnEnqueues 0
statistic msgTxnDequeues 0
statistic msgPersistEnqueues 0
statistic msgPersistDequeues 0
statistic msgDepth 0
statistic byteDepth 0 octets
statistic byteTotalEnqueues 0
statistic byteTotalDequeues 0
statistic byteTxnEnqueues 0
statistic byteTxnDequeues 0
statistic bytePersistEnqueues 0
statistic bytePersistDequeues 0
statistic consumerCount 0 consumers
statistic consumerCountHigh 0
statistic consumerCountLow 0
statistic bindingCount 2749 bindings
statistic bindingCountHigh 2749
statistic bindingCountLow 2749
statistic unackedMessages 0 messages
statistic unackedMessagesHigh 0
statistic unackedMessagesLow 0
statistic messageLatencySamples 0
statistic messageLatencyMin 0
statistic messageLatencyMax 0
statistic messageLatencyAverage 0
qpid: show 110
Object of type org.apache.qpid.broker:exchange: (last sample time: 13:55:35)
Type Element 110
==============================================
property vhostRef 103
property name Feeds
property type headers
property durable False
property arguments {}
statistic producerCount 0
statistic producerCountHigh 0
statistic producerCountLow 0
statistic bindingCount 21239
statistic bindingCountHigh 21239
statistic bindingCountLow 21239
statistic msgReceives 3779362
statistic msgDrops 27321
statistic msgRoutes 88668976947
statistic byteReceives 1853290588
statistic byteDrops 12513018
statistic byteRoutes 40810861233002
On Tue, Mar 10, 2009 at 2:22 PM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> On Tue, Mar 10, 2009 at 1:29 AM, Gordon Sim <gs...@redhat.com> wrote:
>>> If they don't match any bindings they will be dropped by the exchange. If
>>> the dropped count is staying the same as the routed count goes up, then
>>> the messages are matching a subscription.
>>>
>>
>> My subscription says arguments={"x-match": "all", "Type": "Animal",
>> "Species": "Alien"}.
>> My messages have Type="Animal" and a couple of other headers, but Species
>> is anything but Alien.
>>
>> And about what you said about subscriptions being dropped - that's scary.
>> How can subscriptions be dropped by the broker?
>>
>
> Subscriptions are _not_ dropped; _messages_ for which there is no matching
> binding (i.e. no interested subscriber) are dropped.
>
>> How would it know whether,
>> sometime in the future, messages that match those subscriptions are not
>> going to start coming in?
>>
>>> However, it sounds like maybe there are queues that you don't expect to
>>> be receiving messages that are bound to your exchange. Does 'list
>>> binding' show anything up?
>>
>>
>> All those subscriptions are around even after the clients that created
>> them have exited, and 'list binding' shows that the msgMatched statistic
>> is a big number for them.
>>
>
> For those bindings which show a non-zero msgMatched, the queue for that
> binding must show a non-zero number of messages enqueued. So for those
> bindings with a large msgMatched, see what stats are shown for the
> corresponding queue.
>
>> None of my clients receive any messages (as I have a print statement in
>> them to check whether they have received any), and the queues also show
>> msgDepth as zero at all times.
>>
>
>
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
>> On Tue, Mar 10, 2009 at 1:29 AM, Gordon Sim <gs...@redhat.com> wrote:
>> If they don't match any bindings they will be dropped by the exchange. If
>> the dropped count is staying the same as the routed count goes up, then the
>> messages are matching a subscription.
>
> My subscription says arguments={"x-match": "all", "Type": "Animal",
> "Species": "Alien"}.
> My messages have Type="Animal" and a couple of other headers, but Species
> is anything but Alien.
>
> And about what you said about subscriptions being dropped - that's scary.
> How can subscriptions be dropped by the broker?
Subscriptions are _not_ dropped; _messages_ for which there is no
matching binding (i.e. no interested subscriber) are dropped.
> How would it know if
> sometime in the future, messages that match those subscriptions are not
> going to start coming in?
>
>> However it sounds like maybe there are queues that you don't expect to be
>> receiving messages that are bound to your exchange. Does 'list binding' show
>> anything up?
>
> All those subscriptions are around even after the clients that created
> them have exited, and 'list binding' shows that the msgMatched statistic
> is a big number for them.
For those bindings which show a non-zero msgMatched, the queue for that
binding must show a non-zero number of messages enqueued. So for those
bindings with a large msgMatched, see what stats are shown for the
corresponding queue.
> None of my clients receive any messages (as I have a print statement in
> them to check whether they have received any), and the queues also show
> msgDepth as zero at all times.
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
> On Tue, Mar 10, 2009 at 1:29 AM, Gordon Sim <gs...@redhat.com> wrote:
> If they don't match any bindings they will be dropped by the exchange. If
> the dropped count is staying the same as the routed count goes up, then the
> messages are matching a subscription.
My subscription says arguments={"x-match": "all", "Type": "Animal",
"Species": "Alien"}.
My messages have Type="Animal" and a couple of other headers, but Species is
anything but Alien.
And about what you said about subscriptions being dropped - that's scary.
How can subscriptions be dropped by the broker? How would it know whether,
sometime in the future, messages that match those subscriptions are not
going to start coming in?
> However it sounds like maybe there are queues that you don't expect to be
> receiving messages that are bound to your exchange. Does 'list binding' show
> anything up?
All those subscriptions are around even after the clients that have created
them have exited, and 'list binding' shows that the msgMatched statistic is
a big number for them
None of my clients receive any messages (as i have a print in them to check
if they have received any) and the queues also show the msgDepth to be zero
at all times.
Thanks in advance
gs
On Tue, Mar 10, 2009 at 1:29 AM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> On Mon, Mar 9, 2009 at 1:51 PM, Gordon Sim <gs...@redhat.com> wrote:
>>> If you use the tool to look at the subscription queues while the memory
>>> is climbing you can see the queue depth for these.
>>
>> That's exactly what I'm confused about because the messages i sent will
>> NOT match any of the subscriptions, as the tests I'm conducting are to
>> establish the load due to a high number of subscriptions.
>>
>
> If they don't match any bindings they will be dropped by the exchange. If
> the dropped count is staying the same as the routed count goes up, then the
> messages are matching a subscription.
>
>
>> The queues show 0 for everything except the binding count and the consumer
>> count (1).
>>
>>> Are you accepting the messages after the subscriber receives them (or
>>> using accept_mode=not_required)? The broker will not dequeue messages
>>> from the queue until you do so.
>>
>> I do not do anything special here since i do not accept any messages but
>> the queue autoDelete property shows false. Not sure what this means or if
>> it has any significance since msgDepth shows 0.
>>
>> The python code i use to accept messages looks like this:
>>
>> def dump_queue(queue):
>>     content = ""  # Content of the last message read
>>     message = 0
>>
>>     while 1:
>>         try:
>>             message = queue.get(timeout=10000000000)
>>             content = message.body
>>             session.message_accept(RangedSet(message.id))
>>             print "Received a message and sent ack"
>>         except Empty:
>>             continue
>>
>
>
> Ok, the message_accept there is what will trigger the dequeues.
>
> However it sounds like maybe there are queues that you don't expect to be
> receiving messages that are bound to your exchange. Does 'list binding' show
> anything up?
>
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
>> On Mon, Mar 9, 2009 at 1:51 PM, Gordon Sim <gs...@redhat.com> wrote:
>> If you use the tool to look at the subscription queues while the memory is
>> climbing you can see the queue depth for these.
>
> That's exactly what I'm confused about because the messages i sent will NOT
> match any of the subscriptions, as the tests I'm conducting are to establish
> the load due to a high number of subscriptions.
If they don't match any bindings they will be dropped by the exchange.
If the dropped count is staying the same as the routed count goes up,
then the messages are matching a subscription.
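To make the counter behaviour described here concrete, below is a toy model (an assumption about the semantics discussed in this thread, not broker source; whether msgRoutes counts once per message or once per matched queue is part of that assumption):

```python
class ExchangeStats:
    """Toy model of the exchange counters discussed in this thread."""

    def __init__(self):
        self.msg_receives = 0  # every message published to the exchange
        self.msg_routes = 0    # incremented per queue a message is routed to
        self.msg_drops = 0     # incremented when no binding matches

    def on_message(self, matching_queues):
        self.msg_receives += 1
        if matching_queues:
            self.msg_routes += len(matching_queues)
        else:
            self.msg_drops += 1

stats = ExchangeStats()
stats.on_message([])            # no match: counted as a drop
stats.on_message(["q1", "q2"])  # matches: routed to two queues
print(stats.msg_receives, stats.msg_routes, stats.msg_drops)  # 2 2 1
```

In this model, drops staying flat while routes climb can only mean incoming messages are matching at least one binding.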
>
> The queues show 0 for everything except the binding count and the consumer
> count (1).
>
>> Are you accepting the messages after the subscriber receives them (or using
>> accept_mode=not_required)? The broker will not dequeue messages from the
>> queue until you do so.
>
> I do not do anything special here since i do not accept any messages but the
> queue autoDelete property shows false. Not sure what this means or if it has
> any significance since msgDepth shows 0.
>
> The python code i use to accept messages looks like this:
>
> def dump_queue(queue):
>     content = ""  # Content of the last message read
>     message = 0
>
>     while 1:
>         try:
>             message = queue.get(timeout=10000000000)
>             content = message.body
>             session.message_accept(RangedSet(message.id))
>             print "Received a message and sent ack"
>         except Empty:
>             continue
Ok, the message_accept there is what will trigger the dequeues.
However it sounds like maybe there are queues that you don't expect to
be receiving messages that are bound to your exchange. Does 'list
binding' show anything up?
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
> On Mon, Mar 9, 2009 at 1:51 PM, Gordon Sim <gs...@redhat.com> wrote:
> If you use the tool to look at the subscription queues while the memory is
> climbing you can see the queue depth for these.
That's exactly what I'm confused about because the messages i sent will NOT
match any of the subscriptions, as the tests I'm conducting are to establish
the load due to a high number of subscriptions.
The queues show 0 for everything except the binding count and the consumer
count (1).
> Are you accepting the messages after the subscriber receives them (or using
> accept_mode=not_required)? The broker will not dequeue messages from the
> queue until you do so.
I do not do anything special here since i do not accept any messages but the
queue autoDelete property shows false. Not sure what this means or if it has
any significance since msgDepth shows 0.
The python code i use to accept messages looks like this:
from Queue import Empty              # queue.get raises Empty on timeout
from qpid.datatypes import RangedSet

def dump_queue(queue):
    content = ""  # Content of the last message read
    message = 0

    while 1:
        try:
            message = queue.get(timeout=10000000000)
            content = message.body
            session.message_accept(RangedSet(message.id))
            print "Received a message and sent ack"
        except Empty:
            continue
Thanks for the replies, appreciate all the help.
Rgds
gs
On Mon, Mar 9, 2009 at 1:51 PM, Gordon Sim <gs...@redhat.com> wrote:
> GS.Chandra N wrote:
>
>> Adding some more details -
>>
>> All messages are sent to a new exchange created thus "qpid-config add
>> exchange headers Feeds". The exchange is shown as not durable in
>> qpid-tool.
>>
>> There is no increase in size of memory unless subscriptions are created.
>>
>> When there are no subscriptions and messages are being pumped in ,
>>
>> msgReceives is equal to msgDrops
>> byteReceives is equal to byteDrops
>> msgRoutes and byteRoutes are zero
>>
>> All other stats for the exchange show zero
>>
>> Though i have 6 server processes pumping in messages and one qpid-tool I
>> see only 2 queues. They are named mgmt-HOST.port and repl-HOST.port. I
>> suppose these are created by qpid-tool and not by the exchange processes
>> since they do not create any queues for sending messages.
>>
>> When there are subscriptions created and messages are being pumped in,
>>
>> msgReceives starts to climb and msgDrops stays the same
>> byteReceives starts to climb and byteDrops stays the same
>> msgRoutes and byteRoutes are non-zero
>>
>> It is clear that the messages are for some reason being cached though it's
>> not clear why.
>>
>
> That is as expected. The msgRoutes count tracks the number of messages that
> are routed to subscriber queues and the msgDrops count tracks the number of
> messages that were dropped due to there being no matching subscriptions.
>
> If you use the tool to look at the subscription queues while the memory is
> climbing you can see the queue depth for these.
>
> Are you accepting the messages after the subscriber receives them (or using
> accept_mode=not_required)? The broker will not dequeue messages from the
> queue until you do so.
>
Re: Memory pile up on broker
Posted by Gordon Sim <gs...@redhat.com>.
GS.Chandra N wrote:
> Adding some more details -
>
> All messages are sent to a new exchange created thus "qpid-config add
> exchange headers Feeds". The exchange is shown as not durable in qpid-tool.
>
> There is no increase in size of memory unless subscriptions are created.
>
> When there are no subscriptions and messages are being pumped in ,
>
> msgReceives is equal to msgDrops
> byteReceives is equal to byteDrops
> msgRoutes and byteRoutes are zero
>
> All other stats for the exchange show zero
>
> Though i have 6 server processes pumping in messages and one qpid-tool I
> see only 2 queues. They are named mgmt-HOST.port and repl-HOST.port. I
> suppose these are created by qpid-tool and not by the exchange processes
> since they do not create any queues for sending messages.
>
> When there are subscriptions created and messages are being pumped in,
>
> msgReceives starts to climb and msgDrops stays the same
> byteReceives starts to climb and byteDrops stays the same
> msgRoutes and byteRoutes are non-zero
>
> It is clear that the messages are for some reason being cached though it's
> not clear why.
That is as expected. The msgRoutes count tracks the number of messages
that are routed to subscriber queues and the msgDrops count tracks the
number of messages that were dropped due to there being no matching
subscriptions.
If you use the tool to look at the subscription queues while the memory
is climbing you can see the queue depth for these.
Are you accepting the messages after the subscriber receives them (or
using accept_mode=not_required)? The broker will not dequeue messages
from the queue until you do so.
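The dequeue-on-accept behaviour can be pictured with a small model (a sketch of the semantics just described, not the qpid implementation): with explicit accept mode, a delivered message stays on the broker queue, holding memory, until the client accepts it.

```python
from collections import deque

class BrokerQueue:
    """Sketch: messages delivered but not yet accepted remain on the queue."""

    def __init__(self):
        self.unacked = deque()  # delivered, awaiting accept

    def deliver(self, message):
        self.unacked.append(message)  # broker still holds the message
        return message

    def accept(self, message):
        self.unacked.remove(message)  # dequeue happens only on accept

q = BrokerQueue()
m = q.deliver("500-byte payload")
print(len(q.unacked))  # 1: memory held until message_accept
q.accept(m)
print(len(q.unacked))  # 0: released once accepted
```

This is why a subscriber that never calls message_accept (and is not using accept_mode=not_required) can make broker memory grow even when msgDepth appears low.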
Re: Memory pile up on broker
Posted by "GS.Chandra N" <gs...@gmail.com>.
Adding some more details -
All messages are sent to a new exchange created thus "qpid-config add
exchange headers Feeds". The exchange is shown as not durable in qpid-tool.
There is no increase in size of memory unless subscriptions are created.
When there are no subscriptions and messages are being pumped in ,
msgReceives is equal to msgDrops
byteReceives is equal to byteDrops
msgRoutes and byteRoutes are zero
All other stats for the exchange show zero
Though i have 6 server processes pumping in messages and one qpid-tool I
see only 2 queues. They are named mgmt-HOST.port and repl-HOST.port. I
suppose these are created by qpid-tool and not by the exchange processes
since they do not create any queues for sending messages.
When there are subscriptions created and messages are being pumped in,
msgReceives starts to climb and msgDrops stays the same
byteReceives starts to climb and byteDrops stays the same
msgRoutes and byteRoutes are non-zero
It is clear that the messages are for some reason being cached though it's
not clear why.
Thanks in advance
gs
On Sun, Mar 8, 2009 at 11:12 AM, GS.Chandra N <gs...@gmail.com> wrote:
> Hi,
>
> I'm stress testing the broker for evaluating the subscription routing
> performance and i see that the memory on the server goes up really high. (1
> gig RSS, almost same Virtual size).
>
> However i'm not able to attribute the reason behind these pile-ups. I
> cannot see the pile-up attributed to any queues or connections at the
> server.
>
> I do not create any queues from the server process to send messages and
> simply call the python api session.message_transfer to send my messages at a
> rate of 3000-odd messages per sec (500-byte payload).
>
> How do I figure out what is happening?
>
> This is my test setup:
>
> 1. A 2 CPU XEON server running the broker
> 2. Another server running MULTIPLE python scripts (same script / copies of
> each other) that publish messages to the server
> 3. Another server that runs a few clients that create a total of 20,000-odd
> subscriptions at the broker. Only 3000 subscriptions amongst these are
> unique; the rest are copies.
>
> The client subscriptions never match the messages sent in (to avoid
> bandwidth choking) and therefore the queues created show 0 messages enqueued.
>
> Thanks
> gs
>