Posted to dev@trafficserver.apache.org by ri lu <rs...@gmail.com> on 2017/06/01 06:53:54 UTC

Slow-client-triggered cache fill slows down subsequent fast clients when read_while_writer is enabled

Hi Team,

For a cache-miss object (say 100 MB), if the first client is on a slow
network, say 100 kbps, then the cache fill is limited to 100 kbps as well,
which restricts the download rate of subsequent clients when
*read_while_writer is enabled*. I think (correct me if I'm wrong) this is
because the first, slow client is attached to the HttpTunnel's HTTP server
producer instead of the cache producer: if the producer filled the cache
too fast it would exhaust memory, but the side effect of this design is
that the disk cache consumer is paced to the speed of the slow client,
which in turn limits the download bitrate of subsequent clients. I think
this is a real problem for a caching proxy product, because the cache fill
speed depends on the *first* client's speed. (Note that read_while_writer
has to be enabled in my case.)
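To put numbers on the scenario above, here is a toy calculation using the figures from this post (100 MB object, 100 kbps first client) plus an assumed 100 Mbps path to the origin; only the 100 MB and 100 kbps values come from the thread:

```python
# Hypothetical numbers: 100 MB object and a 100 kbps first client are
# from the thread; the 100 Mbps origin link is an assumption.

OBJECT_BYTES = 100 * 10**6   # 100 MB
CLIENT_BPS = 100 * 10**3     # 100 kbps, in bits per second
ORIGIN_BPS = 100 * 10**6     # assumed 100 Mbps origin path

def fill_seconds(size_bytes, rate_bps):
    """Time to stream size_bytes at rate_bps (bits per second)."""
    return size_bytes * 8 / rate_bps

paced_by_client = fill_seconds(OBJECT_BYTES, CLIENT_BPS)  # 8000 s, over 2 hours
paced_by_origin = fill_seconds(OBJECT_BYTES, ORIGIN_BPS)  # 8 s

print(f"fill paced by slow client: {paced_by_client:.0f} s")
print(f"fill paced by origin link: {paced_by_origin:.0f} s")
```

Every read_while_writer reader is capped by the fill rate, so the gap between 8000 s and 8 s is exactly the penalty the thread is about.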

My question is: is there any workaround (other than the flow control
feature) for this issue, and is there any plan to improve this, e.g. by
attaching the first client to the disk cache producer?
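For readers hitting the same problem, the flow-control workaround mentioned here is toggled in records.config. A sketch only; option names and the byte-valued water marks should be checked against your ATS version's documentation:

```
CONFIG proxy.config.cache.enable_read_while_writer INT 1
CONFIG proxy.config.http.flow_control.enabled INT 1
CONFIG proxy.config.http.flow_control.high_water INT 1048576
CONFIG proxy.config.http.flow_control.low_water INT 65536
```

Flow control bounds how far the server producer can run ahead of its consumers, which limits memory use but does not decouple the cache fill from the slowest client.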

-Thanks
-Ryan

Re: Slow-client-triggered cache fill slows down subsequent fast clients when read_while_writer is enabled

Posted by ri lu <rs...@gmail.com>.
Alan, thanks for your quick response.

I'm trying to work out a plugin that hooks the cache lookup complete phase
and puts the client transaction into a pending queue on a cache miss, then
fakes a client request by issuing one through the HttpFetch API with a
special HTTP request header set. Meanwhile, there would be a change in
HttpTunnel: for this special request I would trigger HttpTunnel's
inactivity timeout logic immediately instead of waiting for the 30 s
default, which detaches the client but lets the cache fill of the content
continue. The plugin would resume the pending transaction queue on
TS_FETCH_EVENT_EXT_BODY_DONE. Then, I think, all of the real transactions
would be attached to the cache producer. Do you think this is a good
solution for the issue?
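The expected benefit of this queueing scheme can be sanity-checked with a toy timeline model. This is not ATS plugin code (the real implementation would use the cache lookup hook and the HttpFetch API as described above); the origin and fast-client rates are assumptions:

```python
# Toy timeline model of the proposed plugin, not ATS code.
# Assumptions: once the fake fetch detaches the slow client, the cache
# fill runs at an assumed 100 Mbps origin speed; queued clients resume
# when the object is complete (TS_FETCH_EVENT_EXT_BODY_DONE in the real
# plugin) and then read from cache at their own rate (assumed 50 Mbps).

OBJECT_BYTES = 100 * 10**6  # 100 MB object from the thread

def seconds(size_bytes, rate_bps):
    """Time to stream size_bytes at rate_bps (bits per second)."""
    return size_bytes * 8 / rate_bps

# Cache fill driven by the fake fetch at origin speed, not the slow client.
fill_done = seconds(OBJECT_BYTES, 100 * 10**6)

# A fast client queued at t=0 resumes at fill_done, then streams from cache.
fast_client_done = fill_done + seconds(OBJECT_BYTES, 50 * 10**6)

# Without the plugin, the same client is paced by the 100 kbps fill.
paced_done = seconds(OBJECT_BYTES, 100 * 10**3)

print(fill_done, fast_client_done, paced_done)
```

Under these assumptions the queued fast client finishes in tens of seconds instead of hours, at the cost of delaying it until the fill completes rather than letting it read while the object is still being written.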

-Thanks
-Ryan


Re: Slow-client-triggered cache fill slows down subsequent fast clients when read_while_writer is enabled

Posted by Alan Carroll <so...@yahoo-inc.com.INVALID>.
There's not really a workaround, other than flow control, at this time. Changing it would be a tricky bit of work because, at the time the first transaction occurs, there is no cache producer to attach to. After the second client arrives, the first transaction's HttpTunnel would somehow need to be restructured, in a thread-safe manner, to use a different CacheVC. That wouldn't be simple.