Posted to dev@qpid.apache.org by gregory james marsh <ma...@cse.ohio-state.edu> on 2008/11/23 22:26:52 UTC

AckPolicy in M4 alpha?


Is the AckPolicy object in M3 C++ equivalent to
SubscriptionSettings.autoAck in M4?

It appears AckPolicy has been discontinued in the M4 API, so I
"converted" my code as shown below.  With the M4 code I ran an experiment
that I had also used with M3, but in M4 the broker crashes.  Details
below.



// Old M3 Code
Session session = connection.newSession();
AckPolicy subscrptn_ack_policy(0);  // ack interval 0: never auto-ack
SubscriptionManager subscriptions(session);
subscriptions.setAckPolicy(subscrptn_ack_policy);
subscriptions.subscribe(local_my_queue, my_queue_name, my_queue_name);


// M4 Code:  Is this equivalent to the M3 code?
Session session = connection.newSession();
SubscriptionSettings subscrptn_setngs;
subscrptn_setngs.autoAck = 0;  // intended as the M4 equivalent of AckPolicy(0)
SubscriptionManager subscriptions(session);
subscriptions.subscribe(local_my_queue, my_queue_name,
                        subscrptn_setngs, my_queue_name);




I started the broker as follows.  Both the publisher (1) and the
consumers (4, 8, 16, or 32) connect via TCP, also with no delay
(TCP_NODELAY):

$ qpidd --auth no --default-queue-limit 4294967295 --tcp-nodelay



The M4 code yields this error message on the broker.  This did not occur
with the same experiment in M3.

resource-limit-exceeded: resource-limit-exceeded: Policy exceeded on
6ed11fcb-3477-4fea-981b-719306be4fd8 by message 682489 of size 524288,
policy: size: max=4294967295, current=4294458544; count: unlimited;
type=flow_to_disk (qpid/broker/QueuePolicy.cpp:74)



Here is the experiment described: sequentially send 10000 messages at each
of the following byte sizes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024,
2048, 4096, 8192 to the amq.fanout exchange.  Continue the sequential
sending, but with 1000 messages at each of the following byte sizes:
16384, 32768, 65536, 131072, 262144, and 524288.  The crash happens while
sending the 1000 messages of 524288 bytes.
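
For reference, the send loop looks roughly like this (a simplified
sketch, not the exact code; it reuses the session from the snippets
above, and the 'x' filler payload is just for illustration):

// Sketch of the publisher: phase 1 sends 10000 messages at each of
// the smaller sizes, phase 2 sends 1000 at each of the larger sizes.
int small_sizes[] = {1, 2, 4, 8, 16, 32, 64, 128, 256, 512,
                     1024, 2048, 4096, 8192};
int large_sizes[] = {16384, 32768, 65536, 131072, 262144, 524288};
Message msg;

for (int i = 0; i < 14; ++i) {
    msg.setData(std::string(small_sizes[i], 'x'));  // N-byte payload
    for (int j = 0; j < 10000; ++j)
        session.messageTransfer(arg::content=msg,
                                arg::destination="amq.fanout");
}

for (int i = 0; i < 6; ++i) {
    msg.setData(std::string(large_sizes[i], 'x'));
    for (int j = 0; j < 1000; ++j)
        session.messageTransfer(arg::content=msg,
                                arg::destination="amq.fanout");
}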

This crash happens at the same time/place whether I have 4, 8, 16, or 32
consumers bound to the broker.

My workaround was to set subscrptn_setngs.autoAck = 1, and then the
problem does not occur.

We were originally using an ack policy setting of 0 in M3
in hopes of squeezing out faster performance, but it turns out M4 gives us
slightly better performance even with autoAck = 1.

Thanks in advance for any insights as to what is happening and why there
is a difference between the two releases.

Greg




Re: AckPolicy in M4 alpha?

Posted by Gordon Sim <gs...@redhat.com>.
gregory james marsh wrote:
> 
> Is the AckPolicy object in M3 C++ equivalent to
> SubscriptionSettings.autoAck in M4?

Yes, SubscriptionSettings::autoAck does the same thing as the interval 
previously used in AckPolicy. AckPolicy was removed and 
Subscription/SubscriptionSettings were introduced to offer more 
flexibility and ease of use.
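
I.e., schematically, where M3 code had

   AckPolicy policy(n);               // ack every n messages
   subscriptions.setAckPolicy(policy);

the M4 equivalent is

   SubscriptionSettings settings;
   settings.autoAck = n;              // same interval, now a plain field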

> It appears AckPolicy has been discontinued in the M4 API, so I
> "converted" my code as shown below.  With the M4 code I ran an experiment
> that I had also used with M3, but in M4 the broker crashes.  Details
> below.
> 
> 
> 
> // Old M3 Code
> Session session = connection.newSession();
> AckPolicy subscrptn_ack_policy(0);  // ack interval 0: never auto-ack
> SubscriptionManager subscriptions(session);
> subscriptions.setAckPolicy(subscrptn_ack_policy);
> subscriptions.subscribe(local_my_queue, my_queue_name, my_queue_name);
> 
> 
> // M4 Code:  Is this equivalent to the M3 code?
> Session session = connection.newSession();
> SubscriptionSettings subscrptn_setngs;
> subscrptn_setngs.autoAck = 0;  // intended as the M4 equivalent of AckPolicy(0)

If your intention is not to require acknowledgements at all, you also 
want to set:

   subscrptn_setngs.acceptMode = ACCEPT_MODE_NONE;

Setting autoAck = 0 merely disables the automatic issuing of accepts for 
the messages; it does not affect whether the server expects accepts or 
not. If accepts are required, this will cause the messages to remain 
unaccepted (unless the application itself accepts them).

The documentation probably should be clearer on this point.
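
I.e. for a subscription that requires no acknowledgement at all, 
something like this (reusing your variable names):

   SubscriptionSettings subscrptn_setngs;
   subscrptn_setngs.autoAck = 0;                    // no automatic accepts from the client
   subscrptn_setngs.acceptMode = ACCEPT_MODE_NONE;  // broker does not expect accepts either
   SubscriptionManager subscriptions(session);
   subscriptions.subscribe(local_my_queue, my_queue_name,
                           subscrptn_setngs, my_queue_name);

With acceptMode set to ACCEPT_MODE_NONE, delivered messages no longer 
wait for an accept, so they should not count against the queue limit.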

> SubscriptionManager subscriptions(session);
> subscriptions.subscribe(local_my_queue, my_queue_name,
>                         subscrptn_setngs, my_queue_name);
> 
> 
> 
> 
> I started the broker as follows.  Both the publisher (1) and the
> consumers (4, 8, 16, or 32) connect via TCP, also with no delay
> (TCP_NODELAY):
> 
> $ qpidd --auth no --default-queue-limit 4294967295 --tcp-nodelay
> 
> 
> 
> The M4 code yields this error message on the broker.  This did not occur
> with the same experiment in M3.
> 
> resource-limit-exceeded: resource-limit-exceeded: Policy exceeded on
> 6ed11fcb-3477-4fea-981b-719306be4fd8 by message 682489 of size 524288,
> policy: size: max=4294967295, current=4294458544; count: unlimited;
> type=flow_to_disk (qpid/broker/QueuePolicy.cpp:74)

There was previously a bug in the way the limit was checked: unaccepted 
messages were not included. They are now, and so the limit is being hit 
when messages are left unaccepted.
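You can see it in the log above: current=4294458544 plus the next 
524288 byte message comes to 4294982832, which exceeds max=4294967295.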

> Here is the experiment described: sequentially send 10000 messages at each
> of the following byte sizes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024,
> 2048, 4096, 8192 to the amq.fanout exchange.  Continue the sequential
> sending, but with 1000 messages at each of the following byte sizes:
> 16384, 32768, 65536, 131072, 262144, and 524288.  The crash happens while
> sending the 1000 messages of 524288 bytes.
> 
> This crash happens at the same time/place whether I have 4, 8, 16, or 32
> consumers bound to the broker.
> 
> My workaround was to set subscrptn_setngs.autoAck = 1, and then the
> problem does not occur.

Right, that's because the messages are being accepted and removed. You 
could also try turning off the requirement for acking as above.

> We were originally using an ack policy setting of 0 in M3
> in hopes of squeezing out faster performance, but it turns out M4 gives us
> slightly better performance even with autoAck = 1.

You should also see better performance if you increase the autoAck value 
to accept in slightly larger batches.
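
E.g. something like (the exact value is arbitrary; tune it to your 
message rates):

   subscrptn_setngs.autoAck = 100;  // accept in batches of 100 messages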

> Thanks in advance for any insights as to what is happening and why there
> is a difference between the two releases.

Hope the above helps explain it; if not, let me know.