Posted to dev@trafficserver.apache.org by "Walsh, Peter" <Pe...@disney.com> on 2012/08/01 18:14:49 UTC

TSHttpConnect and chunked flow control

Hello all,
We have a plugin that uses TSHttpConnect, and we have noticed that when the response is not served from cache and is larger than about 40 KB, Traffic Server begins to introduce latency between its TS_EVENT_VCONN_READ_READY callbacks.  I have only been able to reproduce this in a clustered environment, with both 3.0.1 and 3.2; it does not happen in a non-clustered environment.  Also, when the object is served from cache (so no chunked encoding), it is returned very quickly.
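
For context, our setup looks roughly like this (a simplified sketch with illustrative names, not our exact code; building the request and all error handling are omitted):

#include <stdint.h>
#include <sys/socket.h>
#include <ts/ts.h>

/* Sketch only: the request is assumed to already be serialized into
 * request_reader, and client_addr is the address we attribute the
 * loopback connection to. */
static void
start_backend_request(TSCont contp, struct sockaddr const *client_addr,
                      TSIOBufferReader request_reader, int64_t bytes_to_read)
{
    /* Loopback connection into Traffic Server's HTTP state machine. */
    TSVConn vconn = TSHttpConnect(client_addr);

    /* Send the request we built. */
    TSVConnWrite(vconn, contp, request_reader,
                 TSIOBufferReaderAvail(request_reader));

    /* Start reading the response; contp now receives
     * TS_EVENT_VCONN_READ_READY callbacks as data arrives. */
    TSIOBuffer       resp_buf    = TSIOBufferCreate();
    TSIOBufferReader resp_reader = TSIOBufferReaderCreate(resp_buf);
    TSVIO            resp_vio    = TSVConnRead(vconn, contp, resp_buf, bytes_to_read);

    (void) resp_reader;  /* kept on the continuation's data in the real plugin */
    (void) resp_vio;
}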

When I turned up debugging, I found repeated "(http_chunk_flow) Blocking reenable - flow control in effect" messages, which seem tied to the delays.

When I try the same request using just a remap rule, I have not been able to reproduce the issue.  That leads me to believe we are doing something wrong in our response handling.

Things I have tried:

 *   Increasing the number of bytes to read in the TSVConnRead call (see the sketch after this list)
 *   Increasing proxy.config.io.max_buffer_size
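
Concretely, those two attempts looked something like this (example values only; the wrapper function is just for illustration):

#include <stdint.h>
#include <ts/ts.h>

/* First item: ask TSVConnRead for effectively unbounded data
 * (INT64_MAX) instead of a fixed byte count. */
static TSVIO
start_response_read(TSVConn vconn, TSCont contp, TSIOBuffer resp_buf)
{
    return TSVConnRead(vconn, contp, resp_buf, INT64_MAX);
}

/* Second item: raise the limit in records.config (example value only):
 *
 *     CONFIG proxy.config.io.max_buffer_size INT 1048576
 */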

Any help would be greatly appreciated.

Code snippet:

        // Drain everything that is currently available from the response reader.
        int64_t avail      = TSIOBufferReaderAvail(mServerResponseBufferReader);
        int64_t totalAvail = avail;

        while (avail > 0) {
            TSIOBufferBlock blk = TSIOBufferReaderStart(mServerResponseBufferReader);

            int64_t read    = 0;
            char   *buf_ptr = (char *) TSIOBufferBlockReadStart(blk, mServerResponseBufferReader, &read);
            if (read <= 0) {
                break;
            }

            // Copy the block's data out, then mark it consumed so the reader
            // advances to the next block on the next iteration.
            saveRespData(buf_ptr, read);
            TSIOBufferReaderConsume(mServerResponseBufferReader, read);
            avail -= read;
        }

        // NOTE: I have tried with and without this statement.
        TSVIONDoneSet(mResponseVio, TSVIONDoneGet(mResponseVio) + totalAvail);

        TSVIOReenable(mResponseVio);
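
For completeness, that snippet runs from our handler for the read events, which is shaped roughly like this (skeleton only; names and the cleanup path are illustrative, not our exact code):

#include <ts/ts.h>

/* Skeleton of the continuation handler; for TSVConnRead the event data
 * is the TSVIO of the read. */
static int
response_handler(TSCont contp, TSEvent event, void *edata)
{
    TSVIO vio = (TSVIO) edata;
    (void) contp;

    switch (event) {
    case TS_EVENT_VCONN_READ_READY:
        /* ... drain mServerResponseBufferReader as in the snippet above ... */
        TSVIOReenable(vio);
        break;

    case TS_EVENT_VCONN_READ_COMPLETE:
    case TS_EVENT_VCONN_EOS:
        /* Response finished (or the connection went away): clean up. */
        TSVConnClose(TSVIOVConnGet(vio));
        break;

    default:
        break;
    }

    return 0;
}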


Thank you,
Pete

RE: TSHttpConnect and chunked flow control

Posted by "Walsh, Peter" <Pe...@disney.com>.
Ok, thanks James.

We are going to add some trace statements to chunked_reenable() and try adjusting those numbers to see whether that fixes our issue.

Pete Walsh
Software Engineer
206-664-4150

-----Original Message-----
From: James Peach [mailto:jpeach@apache.org] 
Sent: Wednesday, August 01, 2012 8:37 PM
To: dev@trafficserver.apache.org
Subject: Re: TSHttpConnect and chunked flow control

On 01/08/2012, at 9:14 AM, "Walsh, Peter" <Pe...@disney.com> wrote:

> Hello all,
> We have a plugin that uses TSHttpConnect, and we have noticed that when the response is not served from cache and is larger than about 40 KB, Traffic Server begins to introduce latency between its TS_EVENT_VCONN_READ_READY callbacks.  I have only been able to reproduce this in a clustered environment, with both 3.0.1 and 3.2; it does not happen in a non-clustered environment.  Also, when the object is served from cache (so no chunked encoding), it is returned very quickly.
> 
> When I turned up debugging, I found repeated "(http_chunk_flow) Blocking reenable - flow control in effect" messages, which seem tied to the delays.

It looks like you will get this message in chunked_reenable() when the following condition is false:
	dbuf->max_read_avail() < max_chunked_ahead_bytes && dbuf->max_block_count() < max_chunked_ahead_blocks

So max_chunked_ahead_bytes is 32K and max_chunked_ahead_blocks is 128. Maybe if the origin is fast, ATS can receive the 40K before the client is able to consume any of it and you end up spuriously activating the flow control.

I don't really have a clue about this part of the code, but that would be my wild guess. You could try recompiling ATS and bumping max_chunked_ahead_bytes ...



Re: TSHttpConnect and chunked flow control

Posted by James Peach <jp...@apache.org>.
On 01/08/2012, at 9:14 AM, "Walsh, Peter" <Pe...@disney.com> wrote:

> Hello all,
> We have a plugin that uses TSHttpConnect, and we have noticed that when the response is not served from cache and is larger than about 40 KB, Traffic Server begins to introduce latency between its TS_EVENT_VCONN_READ_READY callbacks.  I have only been able to reproduce this in a clustered environment, with both 3.0.1 and 3.2; it does not happen in a non-clustered environment.  Also, when the object is served from cache (so no chunked encoding), it is returned very quickly.
> 
> When I turned up debugging, I found repeated "(http_chunk_flow) Blocking reenable - flow control in effect" messages, which seem tied to the delays.

It looks like you will get this message in chunked_reenable() when the following condition is false:
	dbuf->max_read_avail() < max_chunked_ahead_bytes && dbuf->max_block_count() < max_chunked_ahead_blocks

So max_chunked_ahead_bytes is 32K and max_chunked_ahead_blocks is 128. Maybe if the origin is fast, ATS can receive the 40K before the client is able to consume any of it and you end up spuriously activating the flow control.
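
In other words, the guard is shaped roughly like this (illustrative only, not the actual ATS code; the thresholds just mirror the values above):

#include <stdbool.h>
#include <stdint.h>

/* Paraphrase of the chunked_reenable() check, not the real implementation. */
static const int64_t max_chunked_ahead_bytes  = 32 * 1024;  /* 32K */
static const int64_t max_chunked_ahead_blocks = 128;

static bool
chunked_reenable_allowed(int64_t bytes_buffered_ahead, int64_t blocks_buffered_ahead)
{
    /* Only wake the producer while the consumer is not too far behind;
     * otherwise the "Blocking reenable - flow control in effect" message fires. */
    return bytes_buffered_ahead < max_chunked_ahead_bytes &&
           blocks_buffered_ahead < max_chunked_ahead_blocks;
}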

I don't really have a clue about this part of the code, but that would be my wild guess. You could try recompiling ATS and bumping max_chunked_ahead_bytes ...
