Posted to users@qpid.apache.org by Gordon Sim <gs...@redhat.com> on 2012/07/19 19:56:41 UTC

proposal to remove certain features from qpidd

I have been looking at what would be required to get AMQP 1.0 support 
alongside AMQP 0-10 support in the c++ broker, i.e. qpidd.

As part of that it became clear some refactoring of the broker codebase 
would be required[1]. That in turn led me to believe that we should 
consider dropping certain features. These would be dropped *after* the 
pending 0.18 release; i.e. they would still be present in 0.18, but that 
would be the last release in which they were present if my proposal were 
accepted.

The purpose of this mail is to list the features I would propose to drop 
and my reasons for doing so. For those who find it overly long, I 
apologise and offer a very short summary at the end!

In each case the basic argument is that I believe the features are not 
very well implemented and keeping them working as part of my refactoring 
would take extra time that I would rather spend on achieving 1.0 support 
and making real improvements.

The first feature I propose we drop is the 'legacy' versions of LVQ 
behaviour. These forced a choice in the behaviour of the queue when 
browsers (i.e. not destructive subscribers) received messages from it. 
The choice was to either have browsers miss updates, or to suppress the 
replacing of one message by another with a matching key. This choice was 
really driven by a technical problem with the first implementation. We 
have since already moved to an improved implementation where the 
distinction is not relevant. I see no good reason to keep the old 
behaviour any longer.
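
For anyone unsure which variant they rely on: the improved
implementation is the one selected via the qpid.last_value_queue_key
queue argument. A minimal sketch using the C++ messaging API (assuming a
broker on localhost; do check the argument name against the docs for
your version):

    #include <qpid/messaging/Connection.h>
    #include <qpid/messaging/Session.h>
    #include <qpid/messaging/Sender.h>
    #include <qpid/messaging/Message.h>

    using namespace qpid::messaging;

    int main() {
        Connection connection("localhost:5672");
        connection.open();
        Session session = connection.createSession();

        // New-style LVQ: the queue keeps only the latest message for
        // each value of the configured key.
        Sender sender = session.createSender(
            "prices; {create: always, node: {x-declare: {arguments:"
            " {'qpid.last_value_queue_key': 'ticker'}}}}");

        Message message("latest price");
        message.getProperties()["ticker"] = "AMQP";
        sender.send(message);

        connection.close();
        return 0;
    }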

The second feature is the old async queue replication mechanism. This is 
very fragile and I believe is no longer necessary given the new and 
improved HA solution that first appeared in 0.16 and has been improved 
significantly for 0.18.

The third feature is the 'last man standing' or 'cluster durable' 
option. The biggest reason for dropping this comes later(!), but 
considered on its own my concern is that there are no system level tests 
for it so it is very hard to guarantee it still works without writing 
all those tests. I am entirely unconvinced by this solution, and think 
that again the new HA mechanism would be a better way to achieve this 
(you could start up a backup node that forced all the replicated 
messages to disk). I am therefore keen to avoid wasting time and effort.

The fourth feature is - wait for it - the clustered broker capability as 
enabled by the cluster.so plugin. I believe this is nearing the end of 
its life anyway. It is currently only available on linux with no real 
prospects of being ported to windows. The design as it turns out was 
very fragile to changes in the codebase and there are still some 
difficult-to-solve bugs within it. A new HA mechanism has been developed 
(as alluded to above) and I believe that will replace the old cluster. 
The work needed to keep the cluster working through my refactor is 
sizeable. It would in any case have the potential to destabilise the 
cluster (the aforementioned issue with fragility). This seems to me to 
argue strongly for dropping this in releases after 0.18, and for anyone 
affected, that would give them some time to try out the new HA and give 
feedback as well.

The fifth and final feature I propose we drop is the confusingly named 
'flow to disk' feature. Now for this one I have no alternative to offer 
yet. The problem is supporting large queues whose aggregate size far 
exceeds a bounded amount of memory. I believe the current implementation 
is next to useless for the majority of cases as it keeps the headers of 
all messages in memory. It is useless unless your messages are large 
enough that the overhead of keeping these headers in memory is outweighed 
by the size of the body (this overhead is significantly larger than the 
transfer size of the headers). Further, since a common cause for large 
queues is a short lived disparity between the rate of inflow and 
outflow, the current solution can compound the problem by radically 
slowing down consumers even more. I believe there is a better solution 
and I'm not convinced the current solution is worth the effort of 
maintaining any further. (I know Kim has been working on a new store 
interface and removing flow to disk would clean that up nicely as well!)

I hope this makes sense. I'm keen to get any thoughts or feedback on 
these points. The purpose is not to deprive anyone of features they are 
using but rather to spend time on more important work.

Summary:

features to drop are:

(i) legacy lvq modes; lvq support would still remain, only the two old 
and peculiar modes would go; I really doubt anyone actually depends on 
these anyway, they were more a limitation than a feature

(ii) asynchronous queue replication; solution is not mature enough for 
real world use anyway due to fragility and inability to resync; new HA 
mechanism as introduced in 0.16 and improved on in 0.18 should address 
the need anyway.

(iii) clustering including last-man-standing mode; design is brittle and 
currently ties it to linux platform; new HA is the long term solution 
here anyway.

(iv) flow to disk; current solution really doesn't solve the problem anyway

--Gordon

[1] If you are interested at all, you can find my latest patch and some 
notes on the internal changes up on reviewboard: 
https://reviews.apache.org/r/5833/



Re: proposal to remove certain features from qpidd

Posted by Rajith Attapattu <ra...@gmail.com>.
Jakub,

I see the difficulties. Looks like it might not be that simple.

Regards,

Rajith

On Mon, Jul 23, 2012 at 11:35 AM, Jakub Scholz <ja...@scholz.cz> wrote:
> Most of the messages are delivered as broadcasts from one producer
> to multiple receivers. [...] I'm afraid that producer flow control
> would not allow this so easily.


Re: proposal to remove certain features from qpidd

Posted by Jakub Scholz <ja...@scholz.cz>.
Hi Rajith,

Most of the messages are delivered as broadcasts from one producer
to multiple receivers. And even when the queue of some receivers is
full because they are not consuming, we still need to deliver the
message to the rest of the receivers, and at the same time know who
didn't receive the message so we can resend it later. I'm afraid
that producer flow control would not allow this so easily.
We would probably need to change our producers to send the messages
individually - N producers for N consumers. Only then would we be
able to use producer flow control ...

Regards
Jakub

On Mon, Jul 23, 2012 at 3:27 PM, Rajith Attapattu <ra...@gmail.com> wrote:
> I wonder if producer flow control can help here. If implemented
> properly this should (at least theoretically) prevent the broker from
> running out of memory due to queue growth. [...]


Re: proposal to remove certain features from qpidd

Posted by Rajith Attapattu <ra...@gmail.com>.
Jakub,

I wonder if producer flow control can help here.
If implemented properly this should (at least theoretically) prevent
the broker from running out of memory due to queue growth.
As you correctly point out, flow-2-disk just postpones it at best, in
addition to the fact that it has a serious impact on perf.

While there might be an impact on perf with producer flow control, I'm
sure it's way better than flow-2-disk.
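
For example, something along these lines (a sketch only; the qpid.flow_*
argument names are as I recall them from the flow control work, so
verify against the docs), assuming an open qpid::messaging::Session:

    // Producers to this queue are blocked once 1000 messages are
    // queued, and released again when it drains back down to 800.
    qpid::messaging::Sender sender = session.createSender(
        "orders; {create: always, node: {x-declare: {arguments:"
        " {'qpid.flow_stop_count': 1000, 'qpid.flow_resume_count': 800}}}}");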

Regards,

Rajith

On Mon, Jul 23, 2012 at 7:40 AM, Jakub Scholz <ja...@scholz.cz> wrote:
> Yes, the use of flow-to-disk queues unfortunately doesn't solve the
> memory issue 100%. It just decreases the memory consumption, so the
> point when the broker runs out of memory is postponed a bit.


Re: proposal to remove certain features from qpidd

Posted by Carl Trieloff <cc...@redhat.com>.
On 07/23/2012 05:07 PM, Jakub Scholz wrote:
> I definitely do not see any problem in sacrificing features like LVQ.
> I'm not so sure about browsing ... do we need to disable browsing to
> have a real flow-to-disk queue?


For me, having looked at this a while back, it comes down to
simplification. If we can create a subclass of the queue (based on
gsim's nice refactor) that supports true 'flow to disk', then
eliminating features that require you to cursor through the FIFO on
disk will make the implementation more efficient.

So if flow to disk can be functionally isolated and still meet the use
cases, that would be optimal from a broker point of view, however there
is no point doing that if it does not meet users needs.

Carl.



Re: proposal to remove certain features from qpidd

Posted by Jakub Scholz <ja...@scholz.cz>.
Hi Carl,

I definitely do not see any problem in sacrificing features like
LVQ. I'm not so sure about browsing ... do we need to disable browsing
to have a real flow-to-disk queue? If yes, what about multiple
consumers connected to the same queue, or acknowledging messages
out of order?

Regards
Jakub

On Mon, Jul 23, 2012 at 9:20 PM, Carl Trieloff <cc...@redhat.com> wrote:
> Would a queue type that, for example, does not allow browsing but
> flushes everything to disk and only supports FIFO cover your use case?


Re: proposal to remove certain features from qpidd

Posted by Carl Trieloff <cc...@redhat.com>.
On 07/23/2012 07:40 AM, Jakub Scholz wrote:
> Yes, the use of flow-to-disk queues unfortunately doesn't solve the
> memory issue 100%. It just decreases the memory consumption, so the
> point when the broker runs out of memory is postponed a bit.

We actually need a 'real' flow-to-disk queue here.

Would a queue type that, for example, does not allow browsing but
flushes everything to disk and only supports FIFO cover your use case?
The issue is that when you flush everything to disk, any non-FIFO
operation becomes very expensive.

Carl.



Re: proposal to remove certain features from qpidd

Posted by Jakub Scholz <ja...@scholz.cz>.
Yes, the use of flow-to-disk queues unfortunately doesn't solve the
memory issue 100%. It just decreases the memory consumption, so the
point when the broker runs out of memory is postponed a bit.

Regards
Jakub

On Mon, Jul 23, 2012 at 11:09 AM, Gordon Sim <gs...@redhat.com> wrote:
> My point was that in such cases, the memory consumed by the queue is
> not in any way bounded. It will keep growing as the messages pile up.
> [...] I assume that slowed growth is useful in your case?


Re: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
On 07/22/2012 09:31 PM, Jakub Scholz wrote:
> We expect the brokers to deliver approximately hundreds of GB of
> messages per day. Under normal circumstances, most of the messages
> will be consumed by the clients almost immediately, but in some
> exceptional situations, they may need to be stored on the broker. And
> since the performance isn't the biggest issue for us (while on the
> other hand reliability of the broker is), the flow-to-disk queues kind
> of help (yes, the headers are still kept in memory).

My point was that in such cases, the memory consumed by the queue is not 
in any way bounded. It will keep growing as the messages pile up. If the 
content is a significant part of that (i.e. if the messages are 
relatively large), then flow to disk can at least slow the growth. I 
assume that slowed growth is useful in your case?

> Although you can always say that we can get HW with more memory or
> split the load between multiple brokers if necessary, it would still
> be better if the flow-to-disk queues are replaced by some better
> alternative.

I certainly agree a better solution is needed. One where the memory 
required for such queues can be bounded in a reliable fashion, 
regardless of their size.



Re: proposal to remove certain features from qpidd

Posted by Virgilio Fornazin <vi...@gmail.com>.
That's right Ted. You know about our issues.

This is something that will help all qpidd users, and it could also be
'platform-independent' on top of a high-performance disk I/O layer
(like the TCP I/O layer).

On Mon, Jul 23, 2012 at 6:13 PM, Ted Ross <tr...@redhat.com> wrote:
> Flow-to-disk is a completely different use case and should be
> implemented as a separate feature. The primary design goal should be
> to have a memory footprint that is not correlated to the queue size.

Re: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
On 07/23/2012 10:13 PM, Ted Ross wrote:
> What we have is a store that is optimized for journaling (write-only)
> performance to support message persistence.  Flow-to-disk is a
> completely different use case and should be implemented as a separate
> feature.  The primary design goal should be to have a memory footprint
> that is not correlated to the queue size.

Agreed. It is more of a paging/swapping solution. You have a fixed 
amount of memory to use, with a file backing the queue providing extra 
space. You can then move the portion of that file that is mapped into 
memory at any time in order to deliver messages.

You can bring large chunks of the queue state into memory at a time 
(obviously this may displace other portions of the state) which should 
mean that normal consume operation remains efficient and fast.

You don't need to worry about the reliability of the swapped data - if 
messages need to be durably recorded that is still handled by the 
journal, orthogonal to the paging solution.

Obviously if you scroll through the entire queue, e.g. with a browser, 
then you would have to load each part in turn. Were you to do that with 
two browsers at different positions you would trigger a lot of thrashing.

The interaction with special queue types (LVQ and priority queues, for 
example) would likely need special code. In the first instance these 
would likely be mutually exclusive options.
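
To make the mechanics concrete, here is a very rough sketch of the kind
of windowed access I mean, in plain POSIX terms (an illustration of the
idea only, not the actual design; error handling omitted):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstddef>

    // A large file backs the queue; only a fixed-size window of it is
    // mapped into memory at any one time.
    class PagedRegion {
        int fd;
        void* window;
        size_t windowSize;
      public:
        PagedRegion(const char* path, size_t size)
            : window(0), windowSize(size)
        {
            fd = ::open(path, O_RDWR | O_CREAT, 0644);
        }
        // Map the window containing 'offset', displacing whatever part
        // of the file was mapped before.
        void* load(off_t offset) {
            if (window) ::munmap(window, windowSize);
            off_t aligned = offset - (offset % ::sysconf(_SC_PAGESIZE));
            struct stat st;
            ::fstat(fd, &st);
            if (st.st_size < off_t(aligned + windowSize))
                ::ftruncate(fd, aligned + windowSize); // extend backing file
            window = ::mmap(0, windowSize, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, aligned);
            return window;
        }
        ~PagedRegion() {
            if (window) ::munmap(window, windowSize);
            ::close(fd);
        }
    };

The real thing would obviously need to manage multiple windows, message
boundaries and so on; the point is just that the memory in use is the
window size, not the queue size.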



Re: proposal to remove certain features from qpidd

Posted by Ted Ross <tr...@redhat.com>.
On 07/22/2012 06:33 PM, Virgilio Fornazin wrote:
> We use MRG-M here too, and we sometimes run into trouble with this
> confusing flow-to-disk implementation.
>
> What we would expect to replace it is something like a real
> 'queue-on-disk' with parameters like the current flow-to-disk
> implementation has (max messages/bytes in memory, max messages/bytes
> on disk, etc).
I think this is the right way to think about it.  What we have is a 
store that is optimized for journaling (write-only) performance to 
support message persistence.  Flow-to-disk is a completely different use 
case and should be implemented as a separate feature.  The primary 
design goal should be to have a memory footprint that is not correlated 
to the queue size.

-Ted




Re: proposal to remove certain features from qpidd

Posted by Virgilio Fornazin <vi...@gmail.com>.
We use MRG-M here too, and we sometimes run into trouble with this
confusing flow-to-disk implementation.

What we would expect to replace it is something like a real
'queue-on-disk' with parameters like the current flow-to-disk
implementation has (max messages/bytes in memory, max messages/bytes
on disk, etc).


On Sun, Jul 22, 2012 at 5:31 PM, Jakub Scholz <ja...@scholz.cz> wrote:
> We expect the brokers to deliver approximately hundreds of GB of
> messages per day. [...] the flow-to-disk queues kind of help (yes,
> the headers are still kept in memory).
>

Re: proposal to remove certain features from qpidd

Posted by Jakub Scholz <ja...@scholz.cz>.
We use Qpid (well, MRG-M) as an interface to our customers. As such,
we have quite limited control over
- the number of messages
- the time when the messages are consumed (i.e. does the client
connect to consume the messages or not)
We expect the brokers to deliver approximately hundreds of GB of
messages per day. Under normal circumstances, most of the messages
will be consumed by the clients almost immediately, but in some
exceptional situations, they may need to be stored on the broker. And
since the performance isn't the biggest issue for us (while on the
other hand reliability of the broker is), the flow-to-disk queues kind
of help (yes, the headers are still kept in memory).

Although you can always say that we can get HW with more memory or
split the load between multiple brokers if necessary, it would still
be better if the flow-to-disk queues are replaced by some better
alternative.

Regards
Jakub


On Fri, Jul 20, 2012 at 11:54 AM, Gordon Sim <gs...@redhat.com> wrote:
> Understood. Can you give any more detail of how you use them? i.e.
> what sort of scenarios the feature is triggered in?


Re: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
On 07/20/2012 10:22 AM, Jakub Scholz wrote:
> While I agree with you that the flow-to-disk queues have a lot of
> problems, I do not think they are totally useless. If you remove them
> without any real alternative, you may block the upgrade path for many
> people using them. At least speaking for myself, it probably would be
> a problem for some of our brokers.

Understood. Can you give any more detail of how you use them? i.e. what 
sort of scenarios the feature is triggered in?



Re: proposal to remove certain features from qpidd

Posted by Jakub Scholz <ja...@scholz.cz>.
Hi Gordon,

While I agree with you that the flow-to-disk queues have a lot of
problems, I do not think they are totally useless. If you remove them
without any real alternative, you may block the upgrade path for many
people using them. At least speaking for myself, it probably would be
a problem for some of our brokers.

Regards
Jakub

On Thu, Jul 19, 2012 at 7:56 PM, Gordon Sim <gs...@redhat.com> wrote:
> [...the original proposal, quoted in full; see the top of this thread...]


Re: proposal to remove certain features from qpidd

Posted by Carl Trieloff <cc...@redhat.com>.
On 07/20/2012 12:20 PM, Gordon Sim wrote:
> On 07/20/2012 05:12 PM, Carl Trieloff wrote:
>> So the thoughts are to have large message support via grouping which
>> would also then need large queue support. I.e. design better larger
>> queue support and have large message support via grouping be a
>> derivative of that case?
>
> Yes that's my view (grouping here may not be the same as the message
> group functionality as currently exposed). 


Great, that seems the most sensible approach to me.

Carl.




Re: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
On 07/20/2012 05:12 PM, Carl Trieloff wrote:
> So the thoughts are to have large message support via grouping which
> would also then need large queue support. I.e. design better large
> queue support and have large message support via grouping be a
> derivative of that case?

Yes that's my view (grouping here may not be the same as the message 
group functionality as currently exposed).





Re: proposal to remove certain features from qpidd

Posted by Carl Trieloff <cc...@redhat.com>.
On 07/20/2012 05:52 AM, Gordon Sim wrote:
>
> Large message support (distinct from large queue support, which is
> what flow-to-disk currently attempts to address) is not currently
> supported anyway; the maximum size of message is limited by the
> available memory.
>

I know

> I think some form of message 'grouping' is the most promising path
> there; allowing at some level the single logical message to be treated
> as a sequence of smaller messages. That way any support for large
> queues can double as supporting large messages. I'll think some more
> on this in the context of the AMQP 1.0 work. 

So the thoughts are to have large message support via grouping, which
would also then need large queue support. I.e. design better large
queue support and have large message support via grouping be a
derivative of that case?

Carl.



Re: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
On 07/19/2012 09:27 PM, Carl Trieloff wrote:
>
>>
>> features to drop are:
>>
>> (i) legacy lvq modes; lvq support would still remain, only the two old
>> and peculiar modes would go; I really doubt anyone actually depends on
>> these anyway, they were more a limitation than a feature
>
> +1
>
>>
>> (ii) asynchronous queue replication; solution is not mature enough for
>> real world use anyway due to fragility and inability to resync; new HA
>> mechanism as introduced in 0.16 and improved on in 0.18 should address
>> the need anyway.
>
>
> Does the new HA allow async replication -- or is there a way to
> transition the users that use async queue replication to build large
> clusters and do long distance DR between data centres? The guys that use
> this feature have hundreds of brokers deployed and they would need a path
> to move forward.

Yes, it does.

>> (iii) clustering including last-man-standing mode; design is brittle
>> and currently ties it to linux platform; new HA is the long term
>> solution here anyway.
>
> +1 to remove last man standing
>
>>
>> (iv) flow to disk; current solution really doesn't solve the problem
>> anyway
>
>
> +1,
>
> -- what is the plan to get large message support?

Large message support (distinct from large queue support, which is what 
flow-to-disk currently attempts to address) is not currently supported 
anyway; the maximum size of message is limited by the available memory.

I think some form of message 'grouping' is the most promising path 
there; allowing at some level the single logical message to be treated 
as a sequence of smaller messages. That way any support for large queues 
can double as supporting large messages. I'll think some more on this in 
the context of the AMQP 1.0 work.
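
Purely to illustrate the shape of what I mean by grouping, a rough
sketch of client-side chunking (the x-chunk-* property names are
invented for this example, nothing agreed or implemented):

    #include <qpid/messaging/Sender.h>
    #include <qpid/messaging/Message.h>
    #include <stdint.h>
    #include <cstddef>
    #include <string>

    // Send one large logical payload as a sequence of smaller messages
    // that a consumer could reassemble. Property names are hypothetical.
    void sendChunked(qpid::messaging::Sender& sender,
                     const std::string& payload, size_t chunkSize) {
        if (chunkSize == 0) return; // nothing sensible to do
        size_t count = (payload.size() + chunkSize - 1) / chunkSize;
        for (size_t i = 0; i < count; ++i) {
            qpid::messaging::Message m(
                payload.substr(i * chunkSize, chunkSize));
            m.getProperties()["x-chunk-index"] = uint32_t(i);
            m.getProperties()["x-chunk-count"] = uint32_t(count);
            sender.send(m);
        }
    }

The broker-side piece would then only ever need a bounded number of
chunks in memory at once.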



Re: proposal to remove certain features from qpidd

Posted by Carl Trieloff <cc...@redhat.com>.
>
> features to drop are:
>
> (i) legacy lvq modes; lvq support would still remain, only the two old
> and peculiar modes would go; I really doubt anyone actually depends on
> these anyway, they were more a limitation than a feature

+1

>
> (ii) asynchronous queue replication; solution is not mature enough for
> real world use anyway due to fragility and inability to resync; new HA
> mechanism as introduced in 0.16 and improved on in 0.18 should address
> the need anyway.


Does the new HA allow async replication -- or is there a way to
transition the users that use async queue replication to build large
clusters and do long distance DR between data centres? The guys that use
this feature have hundreds of brokers deployed and they would need a path
to move forward.


>
> (iii) clustering including last-man-standing mode; design is brittle
> and currently ties it to linux platform; new HA is the long term
> solution here anyway.

+1 to remove last man standing

>
> (iv) flow to disk; current solution really doesn't solve the problem
> anyway


+1,

-- what is the plan to get large message support?



RE: Straw Poll: proposal to remove certain features from qpidd

Posted by Gary Jackson <ga...@ijetonboard.com>.
Our team concurs: +1 for (a)

-----Original Message-----
From: Alan Conway [mailto:aconway@redhat.com]
Sent: Tuesday, August 07, 2012 2:36 PM
To: users@qpid.apache.org
Subject: Re: Straw Poll: proposal to remove certain features from qpidd

+1 for (a)

On Tue, 2012-08-07 at 19:11 +0100, Gordon Sim wrote:
> [...straw poll quoted in full; see Alan Conway's reply below...]


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Alan Conway <ac...@redhat.com>.
+1 for (a)

On Tue, 2012-08-07 at 19:11 +0100, Gordon Sim wrote:
> So, to follow up and summarise this thread so far, the only contentious 
> point has been the loss of the 'flow to disk' functionality.
> 
> Though the current solution doesn't limit the memory used by a large 
> queue, it can in certain cases reduce the rate of memory growth which in 
> turn may buy a little more time to resolve the root cause. So while 
> those using it are less than fully satisfied, they are (understandably) 
> concerned at having even this limited solution taken away without having 
> any clear plan to offer a replacement.
> 
> I have spent a little time thinking through what a better solution might 
> look like and how much effort it would take. I believe that for ~3-5 
> weeks work I could get something better in place. It would be, in the 
> first instance, posix only[1]. It would be mutually exclusive with lvq 
> or priority queue options. However it would be a more effective limit on 
> the memory consumed as such a queue grew, and (I hope) would have a less 
> drastic performance penalty at larger sizes.
> 
> There are a few options for how to proceed, and I'd like to take a quick 
> straw poll to see which path the community favours.
> 
> (a) go ahead with the refactor, including the removal of features 
> mentioned in the previous mail, subsequently focus first on AMQP 1.0 
> support, only then return to add paged queue support
> 
> (b) go ahead with the refactor, including the removal of features 
> mentioned in the previous mail, subsequently focus first on paged queue 
> support, then proceed to add AMQP 1.0 support
> 
> (c) don't go ahead with the refactor until it can be combined with an 
> alternative to flow to disk, and only then proceed with AMQP 1.0 support
> 
> (d) don't go ahead with the refactor at all
> 
> I myself favour (a). I think AMQP 1.0 support is more important and more 
> work and would like to make more progress on that as soon as possible in 
> order to have something ready for the 0.20 release. I can't guarantee 
> that this path would result in the 0.20 release having the replacement 
> for flow to disk functionality, but if not it would come soon after.
> 
> I'm not so keen on (c) because maintaining such a large patch against a 
> continually moving trunk is a lot of work in itself and I think that 
> time can be better spent. I'm not keen on (d) because I honestly don't 
> think I can add decent 1.0 support (or fix a number of known issues) 
> without something like this refactor.
> 
> Anyway, over to you. Let me know what you think, I'm keen we reach some 
> agreement by the end of the week. In the meantime I'll try and make my 
> proposal for the flow to disk replacement a bit more concrete.
> 
> --Gordon.
> 
> [1] It will be designed such that it is relatively simple to provide 
> alternative implementations for the posix functionality such that anyone 
> with interest can easily add windows support for example. From what I 
> can tell, it doesn't look like flow to disk is supported on windows at 
> present anyway. I could be wrong.
> 


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Cliff Jansen <cl...@gmail.com>.
+1 for (a)

On Tue, Aug 7, 2012 at 11:11 AM, Gordon Sim <gs...@redhat.com> wrote:
> [...straw poll quoted in full; see Alan Conway's reply above...]


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by "Weston M. Price" <wp...@redhat.com>.
On Aug 8, 2012, at 11:25 AM, Rajith Attapattu wrote:

> +1 for (a).
> 
+1 from me as well.


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Rajith Attapattu <ra...@gmail.com>.
+1 for (a).

Rajith


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Andy Goldstein <an...@redhat.com>.
My vote is for (a)

Andy


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Rafael Schloming <ra...@redhat.com>.
+1 for (a)


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Robbie Gemmell <ro...@gmail.com>.
+1 for (a)


Re: Last chance! (was Re: Straw Poll: proposal to remove certain features from qpidd)

Posted by Gordon Sim <gs...@redhat.com>.
On 08/09/2012 06:52 PM, Gordon Sim wrote:
> Thanks to everyone who has voiced an opinion so far. My plan is to start
> getting the patch on to trunk tomorrow.

This has now been done. I've had one report of some intermittent issues 
with ha; there may be other issues that the tests didn't pick up. If you 
come across something, let me know or raise a bug (though Jira seems a 
little unwell at present).


Last chance! (was Re: Straw Poll: proposal to remove certain features from qpidd)

Posted by Gordon Sim <gs...@redhat.com>.
Thanks to everyone who has voiced an opinion so far. My plan is to start 
getting the patch on to trunk tomorrow. If anyone is not happy with 
option (a), has other concerns or simply needs more time, make yourself 
heard!


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Chuck Rolke <cr...@redhat.com>.
+1 for (a)


Re: Straw Poll: proposal to remove certain features from qpidd

Posted by Ken Giusti <kg...@redhat.com>.
I really like the refactor, and would rather see it sooner than later, so I'm good with (a)!
-K


Straw Poll: proposal to remove certain features from qpidd

Posted by Gordon Sim <gs...@redhat.com>.
So, to follow up and summarise this thread so far, the only contentious 
point has been the loss of the 'flow to disk' functionality.

Though the current solution doesn't limit the memory used by a large 
queue, it can in certain cases reduce the rate of memory growth, which 
in turn may buy a little more time to resolve the root cause. So while 
those using it are less than fully satisfied, they are (understandably) 
concerned at having even this limited solution taken away without any 
clear plan for a replacement.

I have spent a little time thinking through what a better solution might 
look like and how much effort it would take. I believe that with ~3-5 
weeks' work I could get something better in place. It would be, in the 
first instance, posix only[1]. It would be mutually exclusive with the 
lvq or priority queue options. However, it would be a more effective 
limit on the memory consumed as such a queue grew, and (I hope) would 
have a less drastic performance penalty at larger sizes.
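
For illustration only, here is a minimal sketch of one shape such a 
paged queue could take, using plain posix I/O. Every name below is 
hypothetical rather than anything that exists in qpidd, and error 
handling is omitted:

    // Illustrative only: a FIFO queue that keeps at most 'limit'
    // messages in memory and spills the rest to a backing file with
    // plain posix I/O. Hypothetical names; error handling omitted.
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <deque>
    #include <string>

    class PagedQueue {
        size_t limit;                    // max messages held in memory
        std::deque<std::string> window;  // in-memory head of the queue
        int fd;                          // backing file for spilled tail
        off_t rdOff, wrOff;              // read/write offsets into file
        size_t spilled;                  // messages currently on disk

        void spill(const std::string& m) {
            uint32_t len = m.size();     // length-prefixed record
            ::pwrite(fd, &len, sizeof(len), wrOff);
            ::pwrite(fd, m.data(), len, wrOff + sizeof(len));
            wrOff += sizeof(len) + len;
            ++spilled;
        }
        std::string unspill() {
            uint32_t len = 0;
            ::pread(fd, &len, sizeof(len), rdOff);
            std::string m(len, '\0');
            ::pread(fd, &m[0], len, rdOff + sizeof(len));
            rdOff += sizeof(len) + len;
            --spilled;
            return m;
        }
      public:
        PagedQueue(const char* path, size_t lim)
            : limit(lim), rdOff(0), wrOff(0), spilled(0) {
            fd = ::open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        }
        ~PagedQueue() { ::close(fd); }

        void push(const std::string& m) {
            // Once anything has spilled, new messages must also spill
            // to preserve FIFO order.
            if (spilled == 0 && window.size() < limit) window.push_back(m);
            else spill(m);
        }
        bool pop(std::string& m) {
            if (window.empty() && spilled > 0) window.push_back(unspill());
            if (window.empty()) return false;
            m = window.front();
            window.pop_front();
            return true;
        }
    };

The point of the bound is that memory use stays proportional to 'limit' 
however deep the queue gets; a real implementation would batch records 
into pages, recycle file space, and handle errors.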

There are a few options for how to proceed, and I'd like to take a quick 
straw poll to see which path the community favours.

(a) go ahead with the refactor, including the removal of features 
mentioned in the previous mail, subsequently focus first on AMQP 1.0 
support, only then return to add paged queue support

(b) go ahead with the refactor, including the removal of features 
mentioned in the previous mail, subsequently focus first on paged queue 
support, then proceed to add AMQP 1.0 support

(c) don't go ahead with the refactor until it can be combined with an 
alternative to flow to disk, and only then proceed with AMQP 1.0 support

(d) don't go ahead with the refactor at all

I myself favour (a). I think AMQP 1.0 support is more important and more 
work and would like to make more progress on that as soon as possible in 
order to have something ready for the 0.20 release. I can't guarantee 
that this path would result in the 0.20 release having the replacement 
for flow to disk functionality, but if not it would come soon after.

I'm not so keen on (c) because maintaining such a large patch against a 
continually moving trunk is a lot of work in itself, and I think that 
time can be better spent. I'm not keen on (d) because I honestly don't 
think I can add decent 1.0 support (or fix a number of known issues) 
without something like this refactor.

Anyway, over to you. Let me know what you think; I'm keen that we reach 
agreement by the end of the week. In the meantime I'll try to make my 
proposal for the flow to disk replacement a bit more concrete.

--Gordon.

[1] It will be designed so that alternative implementations of the posix 
functionality are relatively simple to provide, allowing anyone with an 
interest to add windows support, for example. From what I can tell, flow 
to disk doesn't look to be supported on windows at present anyway. I 
could be wrong.


Re: proposal to remove certain features from qpidd

Posted by Alan Conway <ac...@redhat.com>.
+1 to the entire proposal.

On Thu, 2012-07-19 at 18:56 +0100, Gordon Sim wrote:
> I have been looking at what would be required to get AMQP 1.0 support 
> alongside AMQP 0-10 support in the c++ broker, i.e. qpidd.
> 
> As part of that it became clear some refactoring of the broker codebase 
> would be required[1]. That in turn led me to believe that we should 
> consider dropping certain features. These would be dropped *after* the 
> pending 0.18 release; i.e. they would still be present in 0.18, but that 
> would be the last release in which they were present if my proposal were 
> accepted.
> 
> The purpose of this mail is to list the features I would propose to drop 
> and my reasons for doing so. For those who find it overly long, I 
> apologise and offer a very short summary at the end!
> 
> In each case the basic argument is that I believe the features are not 
> very well implemented and keeping them working as part of my refactoring 
> would take extra time that I would rather spend on achieving 1.0 support 
> making real improvements.
> 
> The first feature I propose we drop is the 'legacy' versions of LVQ 
> behaviour. These forced a choice in the behaviour of the queue when 
> browsers (i.e. not destructive subscribers) received messages from it. 
> The choice was to either have browsers miss updates, or to suppress the 
> replacing of one message by another with a matching key. This choice was 
> really driven by a technical problem with the first implementation. We 
> have since already moved to an improved implementation where the 
> distinction is not relevant. I see no good reason to keep the old 
> behaviour any longer.
> 
> The second feature is the old async queue replication mechanism. This is 
> very fragile and I believe is no longer necessary given the new and 
> improved ha solution that first appeared in 0.16 and has been improved 
> significantly for 0.18.
> 
> The third feature is the 'last man standing' or 'cluster durable' 
> option. The biggest reason for dropping this comes later(!), but 
> considered on its own my concern is that there are no system level tests 
> for it so it is very hard to guarantee it still works without writing 
> all those tests. I am entirely unconvinced by this solution, and think 
> that again the new HA mechanism would be a better way to achieve this 
> (you could start up a backup node that forced all the replicated 
> messages to disk). I am therefore keen to avoid wasting time and effort.
> 
> The fourth feature is - wait for it - the clustered broker capability as 
> enabled by the cluster.so plugin. I believe this is nearing the end of 
> its life anyway. It is currently only available on linux with no real 
> prospects of being ported to windows. The design as it turns out was 
> very fragile to changes in the codebase and there are still some 
> difficult to solve bugs within it. A new HA mechanism has been developed 
> (as alluded to above) and I believe that will replace the old cluster. 
> The work needed to keep the cluster working through my refactor is 
> sizeable. It would in any case have the potential to destabilise the 
> cluster (the aforementioned issue with fragility). This seems to me to 
> argue strongly for dropping this in releases after 0.18, and for anyone 
> affected, that would give them some time to try out the new HA and give 
> feedback as well.
> 
> The fifth and final feature I propose we drop is the confusingly named 
> 'flow to disk' feature. Now for this one I have no alternative to offer 
> yet. The problem is supporting large queues whose aggregate size far 
> exceeds a bounded amount of memory. I believe the current implementation 
> is next to useless for the majority of cases as it keeps the headers of 
> all messages in memory. It is useless unless your messages are large 
> enough that the overhead of keeping these headers in memory is outweighed 
> by the size of the body (this overhead is significantly larger than the 
> transfer size of the headers). Further, since a common cause for large 
> queues is a short-lived disparity between the rate of inflow and 
> outflow, the current solution can compound the problem by radically 
> slowing down consumers even more. I believe there is a better solution 
> and I'm not convinced the current solution is worth the effort of 
> maintaining any further. (I know Kim has been working on a new store 
> interface and removing flow to disk would clean that up nicely as well!)
> 
> I hope this makes sense. I'm keen to get any thoughts or feedback on 
> these points. The purpose is not to deprive anyone of features they are 
> using but rather to spend time on more important work.
> 
> Summary:
> 
> features to drop are:
> 
> (i) legacy lvq modes; lvq support would still remain, only the two old 
> and peculiar modes would go; I really doubt anyone actually depends on 
> these anyway, they were more a limitation than a feature
> 
> (ii) asynchronous queue replication; solution is not mature enough for 
> real world use anyway due to fragility and inability to resync; new HA 
> mechanism as introduced in 0.16 and improved on in 0.18 should address 
> the need anyway.
> 
> (iii) clustering including last-man-standing mode; design is brittle and 
> currently ties it to linux platform; new HA is the long term solution 
> here anyway.
> 
> (iv) flow to disk; current solution really doesn't solve the problem anyway
> 
> --Gordon
> 
> [1] If you are interested at all, you can find my latest patch and some 
> notes on the internal changes up on reviewboard: 
> https://reviews.apache.org/r/5833/
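
The header-overhead point above can be made concrete with a 
back-of-envelope check; the 400-byte per-message figure below is purely 
an assumption for illustration, not a measured qpidd number:

    // Back-of-envelope: if flow to disk keeps ~H bytes of header state
    // in memory per message, paging out the body only helps when the
    // body is large relative to that overhead. H is assumed, not measured.
    #include <cstdio>

    int main() {
        const double H = 400;   // assumed per-message header overhead (bytes)
        const double bodies[] = {64, 256, 1024, 65536};
        const unsigned n = sizeof(bodies) / sizeof(bodies[0]);
        for (unsigned i = 0; i < n; ++i) {
            // fraction of per-message memory released by paging the body out
            double saved = bodies[i] / (bodies[i] + H);
            std::printf("body %6.0fB: paging out frees %4.1f%% of memory\n",
                        bodies[i], saved * 100);
        }
        return 0;
    }

On those assumptions, paging out a 256-byte body releases well under 
half of the per-message memory, which is the sense in which flow to disk 
is "next to useless" unless messages are large.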


