Posted to issues@trafficserver.apache.org by "Faysal Banna (JIRA)" <ji...@apache.org> on 2014/05/13 13:34:15 UTC

[jira] [Commented] (TS-2761) Weird behavior of read-while-write

    [ https://issues.apache.org/jira/browse/TS-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996285#comment-13996285 ] 

Faysal Banna commented on TS-2761:
----------------------------------

By the way guys,
as of Squid 3.5, collapsed_forwarding (read-while-write) works like a charm.
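
For comparison, here is a minimal squid.conf sketch of what I mean (just the stock 3.5 directive, everything else left at its default):

    # let concurrent requests for the same URL share a single upstream fetch
    # instead of each cache miss going to the origin separately
    collapsed_forwarding on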


Just to let you know that other projects have this working well.
I also updated some helpers on Squid with Lua script blocks, and they work like a charm too, with good speed and performance.

No crashes and no errors.

So you should build up some momentum, at least to fix the crashes.

I should note that the crashes on 4.2.1 as well as on 5.0.0 are more frequent. The core dumps were already sent, and attempts were made to fix them, but nothing was really fixed. Thanks guys, I know you are working on them, but to be frank I see other projects resolving their problems at a faster pace.


Much regards


> Weird behavior of read-while-write 
> -----------------------------------
>
>                 Key: TS-2761
>                 URL: https://issues.apache.org/jira/browse/TS-2761
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Cache, Performance, Quality
>            Reporter: Faysal Banna
>             Fix For: sometime
>
>
> Hello.
> There is an issue with the read-while-write behavior in ATS, explained as follows:
> Suppose a client starts a file download, let's say a 10MB file, and after a couple of seconds another client coincidentally requests the same file.
> When client1 terminates the file download for any reason, client2 is supposed to take over and continue the download until the object gets saved, yet
> what happens here is that client2's connection gets interrupted after whatever is configured in proxy.config.http.background_fill_active_timeout.
> Client2 then re-initiates another request with a Range header, and thus the file is never saved.
> Isn't this an unpleasant interaction with read_while_revalidate and the cache saving process?
> Imagine a 200MB Windows update file; we definitely need that saved in the cache.
> Or look at the situation where you have 10 clients watching a movie that I am happy my server is caching. Suddenly the first client who initially requested the movie aborts, all the remaining 9 clients get interrupted, and each one issues a new request with a Range header.
> So now I get 9 different requests for the same movie with Range headers, which are never cached, and thus instead of saving bandwidth you end up consuming bandwidth for the same file (object) 9 times.
> In my opinion, background fill should take over only if no one is consuming the connection (request) anymore, and only then may it time out with whatever timeout is configured.
> Much Regards 
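
For reference, these are the settings involved in the description above, as a records.config sketch (names as in the 4.x/5.x documentation; the values are only illustrative, not a recommendation):

    # allow later clients to read an object that is still being written to the cache
    CONFIG proxy.config.cache.enable_read_while_writer INT 1
    # how long an active background fill may keep running after the initiating client aborts;
    # 0 means no time limit
    CONFIG proxy.config.http.background_fill_active_timeout INT 0
    # fraction of the object that must already be transferred when the client aborts
    # for the fill to continue in the background; 0.0 means always continue
    CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.0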



--
This message was sent by Atlassian JIRA
(v6.2#6252)