Posted to dev@mina.apache.org by Emmanuel Lecharny <el...@apache.org> on 2009/07/30 21:18:19 UTC

Question about how to efficiently handle reads...

Hi guys,

while trying to document the IoProcessor main loop, I noticed that 
read( Session ) reads a single buffer and immediately pushes it to the 
filter chain, regardless of whether more bytes remain in the socket :

    private void read(T session) {
        IoSessionConfig config = session.getConfig();
        IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());

        try {
            int readBytes = 0;
            int ret;

            try {
                ret = read(session, buf);
                if (ret > 0) {
                    readBytes = ret;
                }
            } finally {
                buf.flip();
            }

            if (readBytes > 0) {
                IoFilterChain filterChain = session.getFilterChain();
                filterChain.fireMessageReceived(buf);
                // ...


My question is : should we try to read as much data as we can, emptying 
the socket, and then push one big buffer to the chain, or is it better 
to keep doing what we do now ?

AFAICT, there are pros and cons to both approaches.

1) Reading as much as we can
cons :
- we potentially have to copy a lot of buffers into a bigger one if we 
have, say, many KB of data available
- we may consume a lot of memory to hold those buffers
- we may end up with more than one message in the buffer
pros :
- less filter chain processing
- probably less accumulation if we are using a cumulative protocol decoder
- most likely faster processing of messages

2) Reading one buffer and sending it immediately
cons :
- fragmentation of messages
- many chain traversals for what may be a single large message
- many loops to process a complete message
pros :
- small buffers mean less memory consumption
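For illustration, the first approach could look roughly like the sketch 
below. It uses plain java.nio (ByteBuffer / ReadableByteChannel) instead 
of MINA's IoBuffer so it is self-contained; the names readAll and 
chunkSize are made up for the example :

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class DrainRead {
    // Keep reading until the channel reports no more data (ret <= 0),
    // growing the accumulation buffer as needed, then flip once and
    // hand the whole buffer to the chain in a single call.
    static ByteBuffer readAll(ReadableByteChannel ch, int chunkSize) throws IOException {
        ByteBuffer acc = ByteBuffer.allocate(chunkSize);
        while (ch.read(acc) > 0) {
            if (!acc.hasRemaining()) {
                // Grow by copying into a buffer twice the size; this
                // copy is exactly the "con" listed above.
                ByteBuffer bigger = ByteBuffer.allocate(acc.capacity() * 2);
                acc.flip();
                bigger.put(acc);
                acc = bigger;
            }
        }
        acc.flip();
        return acc;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a socket with many KB of pending data.
        byte[] pending = new byte[10_000];
        ReadableByteChannel ch = Channels.newChannel(new ByteArrayInputStream(pending));
        ByteBuffer buf = readAll(ch, 2048);
        System.out.println(buf.remaining()); // prints 10000
    }
}
```

On a non-blocking socket, read() returning 0 (nothing left right now) 
ends the loop, so the chain would see one fireMessageReceived per select 
round instead of one per read-buffer-sized chunk.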

I may be biased, but I tend to think that the first approach is probably 
better.

thoughts ? Would anyone be able to run a load test comparing both 
approaches if we implement the first one ?

Thanks !

-- 
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org



Re: Question about how to efficiently handle reads...

Posted by Emmanuel Lecharny <el...@apache.org>.
Julien Vermillard wrote:
> Hi,
> There is a problem with the fairness against writes no ?
> You can perhaps saturate the IoP with reading ?
>   

yes, probably, but then I think it would be better to use a 
configuration parameter to ensure fairness, like a maximum number of 
bytes we can read before passing to the next connection. What we have 
instead is a brutal system that limits itself to the read buffer size 
(often very small) and makes a hell of a lot of calls through the full 
stack when it could all be done in a single call.

For big messages, that would probably improve the throughput...
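A minimal sketch of such a capped read loop, again with plain java.nio 
so it runs standalone; maxReadBytes stands in for the hypothetical 
configuration parameter :

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class FairRead {
    // Drain the channel, but never more than maxReadBytes per call;
    // leftover data stays in the socket for the next select round, so
    // other sessions (and pending writes) still get their turn.
    static ByteBuffer readUpTo(ReadableByteChannel ch, int chunkSize, int maxReadBytes)
            throws IOException {
        ByteBuffer acc = ByteBuffer.allocate(Math.min(chunkSize, maxReadBytes));
        int total = 0;
        int ret;
        while (total < maxReadBytes && (ret = ch.read(acc)) > 0) {
            total += ret;
            if (!acc.hasRemaining() && total < maxReadBytes) {
                // Grow, but never beyond the fairness cap.
                ByteBuffer bigger =
                        ByteBuffer.allocate(Math.min(acc.capacity() * 2, maxReadBytes));
                acc.flip();
                bigger.put(acc);
                acc = bigger;
            }
        }
        acc.flip();
        return acc;
    }

    public static void main(String[] args) throws IOException {
        byte[] pending = new byte[10_000];
        ReadableByteChannel ch = Channels.newChannel(new ByteArrayInputStream(pending));
        // With a 4 KB cap, the first round hands 4096 bytes to the chain...
        System.out.println(readUpTo(ch, 2048, 4096).remaining()); // prints 4096
        // ...and the rest waits for the following rounds.
        System.out.println(readUpTo(ch, 2048, 4096).remaining()); // prints 4096
        System.out.println(readUpTo(ch, 2048, 4096).remaining()); // prints 1808
    }
}
```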
> Julien
>
> Le Thu, 30 Jul 2009 21:18:19 +0200,
> Emmanuel Lecharny <el...@apache.org> a écrit :
>
>   
>> Hi guys,
>>
>> as I'm trying to document the IoProcessor main loop, I saw that the 
>> read( Session ) does read a buffer and then immediately push it to
>> the chain, regardless there are more bytes in the socket :
>>
>>     private void read(T session) {
>>         IoSessionConfig config = session.getConfig();
>>         IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());
>>
>>         try {
>>             int readBytes = 0;
>>             int ret;
>>
>>             try {
>>                     ret = read(session, buf);
>>                     if (ret > 0) {
>>                         readBytes = ret;
>>                     }
>>                 }
>>             } finally {
>>                 buf.flip();
>>             }
>>
>>             if (readBytes > 0) {
>>                 IoFilterChain filterChain = session.getFilterChain();
>>                 filterChain.fireMessageReceived(buf);
>>
>>
>> My question is : should we try to read as much data as we can,
>> emptying the socket, then pushing the big buffer to the chain, or is
>> it better to do as we do right now ?
>>
>> AFAICT, there are pros and cons for both cases.
>>
>> 1) Reading as much as we can
>> cons :
>> - we have to copy potentially a lot of buffers into a bigger one if
>> we have, say many Kb of data available
>> - we may suck a lot of memory to handle those buffers
>> - we may have more than one message in the buffer
>> pros :
>> - less chain processing
>> - probably less accumulation if we are using a cumulative protocol
>> decoder
>> - most certainly faster processing of messages
>>
>> 2) Reading a buffer and send it
>> cons :
>> - fragmentation of messages
>> - many chain processing for potentially a single large message
>> - many loops to process a complete message
>> pros :
>> - small buffers means less memory consumption
>>
>> I may be biased, but I tend to think that the first approach is
>> probably better.
>>
>> thoughts ? as anyone able to conduct a load tests to compare both 
>> approaches if we implement the first approach ?
>>
>> Thanks !
>>
>>     


-- 
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org



Re: Question about how to efficiently handle reads...

Posted by Julien Vermillard <jv...@archean.fr>.
Hi,
Isn't there a problem with fairness against writes ?
You could perhaps saturate the IoProcessor with reads ?
Julien

Le Thu, 30 Jul 2009 21:18:19 +0200,
Emmanuel Lecharny <el...@apache.org> a écrit :

> Hi guys,
> 
> as I'm trying to document the IoProcessor main loop, I saw that the 
> read( Session ) does read a buffer and then immediately push it to
> the chain, regardless there are more bytes in the socket :
> 
>     private void read(T session) {
>         IoSessionConfig config = session.getConfig();
>         IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());
> 
>         try {
>             int readBytes = 0;
>             int ret;
> 
>             try {
>                     ret = read(session, buf);
>                     if (ret > 0) {
>                         readBytes = ret;
>                     }
>                 }
>             } finally {
>                 buf.flip();
>             }
> 
>             if (readBytes > 0) {
>                 IoFilterChain filterChain = session.getFilterChain();
>                 filterChain.fireMessageReceived(buf);
> 
> 
> My question is : should we try to read as much data as we can,
> emptying the socket, then pushing the big buffer to the chain, or is
> it better to do as we do right now ?
> 
> AFAICT, there are pros and cons for both cases.
> 
> 1) Reading as much as we can
> cons :
> - we have to copy potentially a lot of buffers into a bigger one if
> we have, say many Kb of data available
> - we may suck a lot of memory to handle those buffers
> - we may have more than one message in the buffer
> pros :
> - less chain processing
> - probably less accumulation if we are using a cumulative protocol
> decoder
> - most certainly faster processing of messages
> 
> 2) Reading a buffer and send it
> cons :
> - fragmentation of messages
> - many chain processing for potentially a single large message
> - many loops to process a complete message
> pros :
> - small buffers means less memory consumption
> 
> I may be biased, but I tend to think that the first approach is
> probably better.
> 
> thoughts ? as anyone able to conduct a load tests to compare both 
> approaches if we implement the first approach ?
> 
> Thanks !
>