Posted to dev@activemq.apache.org by Clebert Suconic <cl...@gmail.com> on 2016/12/21 17:32:57 UTC

[DISCUSS] ARTEMIS-24 - Lazy Conversions on messages

Part I: the conversion itself


Currently, when a message is sent over a protocol other than Artemis
Core, the message is converted to Core at the server and stored in
Core format; then, if the consumer is on a protocol other than Core,
it is converted again into that protocol.


Example:

Producer/Consumer using AMQP:

- Client sends AMQP
- Server receives the message, converts it to Core, and stores it in
Core format (on the journal or in paging)
- When dispatching, the Core-to-AMQP converter is used, and the client
receives AMQP

So, I want to make sure the message stays in its original byte format,
untouched, all the way back to the client.
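To make the idea concrete, here is a toy sketch (all names hypothetical, not
the actual Artemis classes) of a message that keeps its original wire bytes
and only parses them when someone actually reads the body:

```java
import java.nio.charset.StandardCharsets;

// Toy sketch: the message holds the original encoding untouched and
// decodes lazily, so a same-protocol consumer never pays for a parse.
class LazyMessage {
    private final byte[] wireBytes;   // original encoding, never rewritten
    private String decodedBody;       // populated on first access only

    LazyMessage(byte[] wireBytes) {
        this.wireBytes = wireBytes;
    }

    boolean isDecoded() {
        return decodedBody != null;
    }

    String getBody() {
        if (decodedBody == null) {
            // parse on demand; a real codec would decode protocol framing here
            decodedBody = new String(wireBytes, StandardCharsets.UTF_8);
        }
        return decodedBody;
    }

    byte[] getWireBytes() {
        return wireBytes;   // forwarded as-is when producer and consumer match
    }
}

public class LazySketch {
    public static void main(String[] args) {
        LazyMessage m = new LazyMessage("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(m.isDecoded());   // nothing parsed yet
        System.out.println(m.getBody());     // first read triggers the decode
        System.out.println(m.isDecoded());   // decoded now, bytes still intact
    }
}
```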




To implement this, I want to create a MessageCodec interface that will
parse the body and properties on demand. That way the conversion would
be more natural than the Message converters we have in place today.

The codec would be something like:

import java.util.Iterator;
import java.util.Map;

import io.netty.buffer.ByteBuf;

import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.utils.TypedProperties;

public interface MessageCodec {

   Object getProtocol();

   TypedProperties getProperties();

   MessageCodec setProperties(Iterator<Map.Entry<Object, Object>> properties);

   Object getBody();

   /** BodyType would be a new enum describing the kind of body (text, bytes, map...). */
   BodyType getBodyType();

   MessageCodec setBody(BodyType type, Object body);

   SimpleString getAddress();

   /** The buffer will belong to this message, until release is called. */
   MessageCodec setBuffer(ByteBuf buffer);

   ByteBuf getBuffer();

   MessageCodec minimalDecode();
}


So, when a message is being converted from AMQP to OpenWire, we would
get the codec for OpenWire from the ProtocolManager and make the
conversion on demand. We wouldn't need any intermediate format for
this; it should always work N<->N across the protocol managers.
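A minimal sketch of that N<->N idea, with hypothetical toy types (not the
proposed MessageCodec itself): because every protocol's codec exposes the
same accessors, converting A to B is just reading through A's codec and
re-encoding through B's, with no intermediate Core representation.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the codec contract: common accessors per protocol.
interface ToyCodec {
    String getProtocol();
    Map<String, Object> getProperties();
    Object getBody();
    ToyCodec setProperties(Map<String, Object> props);
    ToyCodec setBody(Object body);
}

// One class per protocol in reality; here a single generic implementation.
class SimpleCodec implements ToyCodec {
    private final String protocol;
    private Map<String, Object> props = new HashMap<>();
    private Object body;

    SimpleCodec(String protocol) { this.protocol = protocol; }

    public String getProtocol() { return protocol; }
    public Map<String, Object> getProperties() { return props; }
    public Object getBody() { return body; }
    public ToyCodec setProperties(Map<String, Object> p) { this.props = p; return this; }
    public ToyCodec setBody(Object b) { this.body = b; return this; }
}

public class ConvertSketch {
    // N<->N: any source codec can feed any target codec directly.
    static ToyCodec convert(ToyCodec source, ToyCodec target) {
        return target.setProperties(source.getProperties())
                     .setBody(source.getBody());
    }

    public static void main(String[] args) {
        ToyCodec amqp = new SimpleCodec("AMQP").setBody("order-1");
        ToyCodec openWire = convert(amqp, new SimpleCodec("OpenWire"));
        System.out.println(openWire.getProtocol() + ":" + openWire.getBody());
    }
}
```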



Part II: Buffer allocations.

Right now the Core protocol reuses the encoding from Netty->Message
and tries to achieve zero copy all the way from receiving the message,
through the journal add (or page add), and back out to the consuming
client.

The issue is: this worked really well back when this code was first
built on Netty. It actually still works well (see my blog post).
However, now that we are crossing protocols, we need to fix this, and
we have an opportunity to do so.

If I make the buffer allocations more independent, we could start
using Netty's pooled buffers more effectively and avoid a lot of GC
pressure. It would improve things even beyond what we are doing now.
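As a toy illustration of why pooling cuts GC pressure (this is a hypothetical
stdlib sketch, not Netty's PooledByteBufAllocator): buffers are taken from and
returned to a pool, so a steady message flow stops allocating per message.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Toy buffer pool: acquire() reuses a released buffer when one is
// available and only allocates when the pool is empty.
class BufferPool {
    private final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();
    private final int capacity;
    int allocations;   // counts real allocations, to demonstrate reuse

    BufferPool(int capacity) { this.capacity = capacity; }

    ByteBuffer acquire() {
        ByteBuffer b = pool.poll();
        if (b == null) {
            allocations++;
            b = ByteBuffer.allocateDirect(capacity);
        }
        b.clear();
        return b;
    }

    void release(ByteBuffer b) { pool.push(b); }
}

public class PoolSketch {
    public static void main(String[] args) {
        BufferPool pool = new BufferPool(1024);
        for (int i = 0; i < 1000; i++) {
            ByteBuffer b = pool.acquire();
            b.putInt(i);          // pretend this is an encoded message
            pool.release(b);      // returned, so the next acquire reuses it
        }
        // 1000 messages, but only one real allocation thanks to reuse
        System.out.println(pool.allocations);
    }
}
```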


This is getting into a bigger scope than I usually like when fixing an
issue, but given that the encoding and the buffer reuse are tightly
coupled, I need to fix this now.


I was going to post this only in January, after my holiday break, but
since we had a few parallel discussions I wanted to give a heads up
now. I'm not planning to have any big discussions yet; let's save that
for next year, when I can also provide more details.



Happy holidays everyone!