Posted to users@qpid.apache.org by Tor Rune Skoglund <tr...@swi.no> on 2014/03/05 18:01:47 UTC

Questions about queues, priorities and multi-plexing....

Situation description:

Let's say we have a central server with a queue for every connected
node. Every node also has its own exchange and queues. Each of the
server's queues is 'routed' to the node's exchange, which delivers
"automagically" to local (incoming message) queue(s) based on message
type/topic/whatever. The node's applications then read from the
appropriate local queue.

There is only one "physical connection" to a single SERVERIP:PORT,
initiated from each node. The nodes might be on different types of
networks with different characteristics for latency, bandwidth, etc.
For example, we could have cabled gigabit nodes alongside GPRS mobile nodes.

It is our current understanding that a simple queue delivers one message
at a time. This means that a message in transfer will block queued
messages that might be more important until the message is fully
transferred - which might take some time on low bandwidth, unstable
networks.

So how do we make sure more important messages get priority? Our first
idea was to use a priority queue. Even if the queue will send the higher
prioritized messages first, they will still block other messages while
being transferred. Not good.

Next idea: Use a queue for each priority level of messages to each
client. Still, if there is just one message at a time "on the wire", we
have a problem, unless one in some way can "multiplex" messages from
different queues. Which means that lower priority messages still will
flow through the connection, albeit slower (which is OK) when higher
prioritized messages are also being delivered.
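The multiplexing idea in the previous paragraph can be sketched as a
weighted round-robin over per-priority queues: each round, the high
queue gets more slots on the wire than the low queue, but the low queue
is never starved. The 3:1 weighting and the queue names are invented for
illustration; this is not a Qpid feature, just the scheduling concept:

```python
from collections import deque

# Two per-priority queues of message chunks waiting to be sent.
queues = {"high": deque(f"h{i}" for i in range(6)),
          "low":  deque(f"l{i}" for i in range(6))}
weights = {"high": 3, "low": 1}  # high gets 3 slots per round

wire = []  # order in which chunks hit the shared connection
while any(queues.values()):
    for name, q in queues.items():
        for _ in range(weights[name]):
            if q:
                wire.append(q.popleft())

print(wire)
# Low-priority chunks keep flowing (one per round) even while
# high-priority traffic dominates the connection.
```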

Are our naïve assumptions above correct? Or are we imagining a problem
that there is already a standard solution for?

How would you suggest we set up our system to handle prioritized messages
without blocking any of them, but sharing bandwidth "fairly" based on a
message's priority?

How about control messages while payloads are being delivered? Are they
blocked until the message is delivered, or will they get through in some way?

- Tor Rune Skoglund, learning...


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org


Re: Questions about queues, priorities and multi-plexing....

Posted by Gordon Sim <gs...@redhat.com>.
On 03/05/2014 05:01 PM, Tor Rune Skoglund wrote:
> Situation description:
>
> Let's say we have a central server with a queue for every connected
> node. Every node also has its own exchange and queues. Each of the
> server's queues is 'routed' to the node's exchange, which delivers
> "automagically" to local (incoming message) queue(s) based on message
> type/topic/whatever. The node's applications then read from the
> appropriate local queue.
>
> There is only one "physical connection" to a single SERVERIP:PORT,
> initiated from each node. The nodes might be on different types of
> networks with different characteristics for latency, bandwidth, etc.
> For example, we could have cabled gigabit nodes alongside GPRS mobile nodes.
>
> It is our current understanding that a simple queue delivers one message
> at a time. This means that a message in transfer will block queued
> messages that might be more important until the message is fully
> transferred - which might take some time on low bandwidth, unstable
> networks.

I may be misunderstanding what you are saying, but I think this is an 
incorrect assumption.

You can send messages to a queue asynchronously, i.e. send the 
subsequent message before the previous message has been confirmed (or 
even received) by the broker.

The broker will also send out messages to consumers asynchronously.

That is, the fact that it is in the process of delivering a message from 
the queue to one connection does not prevent the message behind that on 
the queue being delivered to another connection on another thread.

For messages going out from the broker over the same connection, there 
is no need to wait for confirmation from the consumer of one message 
before sending another. Though the IO writes for any given connection 
will always be serialised, they can be internally batched. You just need 
to make sure that the clients give the broker sufficient credit, i.e. 
that they enable sufficient 'prefetch'.
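The credit/prefetch point can be modelled very simply: the consumer
grants a credit window, and the broker keeps pushing messages without
waiting for per-message acknowledgements for as long as credit remains.
The class and method names below are invented for this toy model; it is
not the Qpid API, only the flow-control idea Gordon describes:

```python
from collections import deque

class ToyBroker:
    """Toy model of credit-based delivery; not the Qpid implementation."""

    def __init__(self, messages):
        self.queue = deque(messages)
        self.credit = 0  # messages the consumer has agreed to buffer

    def grant_credit(self, n):
        # The client-side 'prefetch' setting effectively does this.
        self.credit += n
        return self.drain()

    def drain(self):
        # Send asynchronously: no per-message wait for confirmation,
        # limited only by the credit the consumer has granted.
        sent = []
        while self.queue and self.credit > 0:
            sent.append(self.queue.popleft())
            self.credit -= 1
        return sent

broker = ToyBroker(["m1", "m2", "m3", "m4", "m5"])
print(broker.grant_credit(3))  # ['m1', 'm2', 'm3'] sent without acks
print(broker.grant_credit(2))  # ['m4', 'm5'] once more credit arrives
```

With credit of 1 this degrades to the one-message-at-a-time behaviour
the original question worried about, which is why a sufficiently large
prefetch matters on high-latency links.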

> So how do we make sure more important messages get priority? Our first
> idea was to use a priority queue. Even if the queue will send the higher
> prioritized messages first, they will still block other messages while
> being transferred. Not good.
>
> Next idea: Use a queue for each priority level of messages to each
> client. Still, if there is just one message at a time "on the wire", we
> have a problem, unless one in some way can "multiplex" messages from
> different queues. Which means that lower priority messages still will
> flow through the connection, albeit slower (which is OK) when higher
> prioritized messages are also being delivered.
>
> Are our naïve assumptions above correct? Or are we imagining a problem
> that there is already a standard solution for?
>
> How would you suggest we set up our system to handle prioritized messages
> without blocking any of them, but sharing bandwidth "fairly" based on a
> message's priority?
>
> How about control messages while payloads are being delivered? Are they
> blocked until the message is delivered, or will they get through in some way?
>
> - Tor Rune Skoglund, learning...
>
>

