Posted to oak-issues@jackrabbit.apache.org by "Andrei Dulceanu (JIRA)" <ji...@apache.org> on 2017/08/18 13:32:00 UTC

[jira] [Created] (OAK-6565) GetBlobResponseEncoder should not write all chunks at once

Andrei Dulceanu created OAK-6565:
------------------------------------

             Summary: GetBlobResponseEncoder should not write all chunks at once
                 Key: OAK-6565
                 URL: https://issues.apache.org/jira/browse/OAK-6565
             Project: Jackrabbit Oak
          Issue Type: Improvement
          Components: segment-tar
    Affects Versions: 1.6.1
            Reporter: Andrei Dulceanu
            Assignee: Andrei Dulceanu
             Fix For: 1.8, 1.7.6


{{GetBlobResponseEncoder}} writes all the chunks too fast, leaving the channel in a non-writable state after the first write. The problem is not visible at first glance, especially when testing with small blobs. Increasing the blob size, as done for OAK-6538, revealed it: not only does this trigger hidden {{OutOfMemory}} errors on either the server or the client, but sometimes incomplete blobs are sent along and interpreted by the client as valid.
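
For illustration only, here is a minimal sketch of the kind of eager write loop described above, assuming Netty 4.x; the class, method and constant names are hypothetical and do not come from the actual {{GetBlobResponseEncoder}} code:

{code:java}
import java.io.IOException;
import java.io.InputStream;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;

// Hypothetical example of the problematic pattern: every chunk is queued in
// one loop without consulting ctx.channel().isWritable(), so the outbound
// buffer keeps growing and large blobs can exhaust the heap.
final class EagerBlobWriter {

    private static final int CHUNK_SIZE = 8 * 1024; // illustrative chunk size

    static void writeAllChunksAtOnce(ChannelHandlerContext ctx, InputStream stream)
            throws IOException {
        byte[] buffer = new byte[CHUNK_SIZE];
        int read;
        while ((read = stream.read(buffer)) != -1) {
            ByteBuf chunk = ctx.alloc().buffer(read);
            chunk.writeBytes(buffer, 0, read);
            // Back-pressure from the client is ignored; the write is queued
            // immediately even when the channel is no longer writable.
            ctx.write(chunk);
        }
        ctx.flush();
    }
}
{code}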

A more elegant solution, which also solves the memory consumption problem, would be to use {{ChunkedWriteHandler}}, which already encapsulates the logic of how and when to write the chunks. {{ChunkedWriteHandler}} must be used in conjunction with a custom {{ChunkedInput<ByteBuf>}} implementation that generates {{header}} + {{payload}} chunks from an {{InputStream}}, as done currently (see the sketch below). This way the server sends the next chunk only after the previous one has been consumed by the client.
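
A minimal sketch of such a {{ChunkedInput<ByteBuf>}}, assuming Netty 4.1; the class name {{ChunkedBlobStream}}, the header layout and the chunk size are assumptions for illustration, not the actual Oak implementation:

{code:java}
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.stream.ChunkedInput;

// Streams a blob as header + payload chunks; ChunkedWriteHandler pulls the
// next chunk only while the channel is writable.
public class ChunkedBlobStream implements ChunkedInput<ByteBuf> {

    private static final int CHUNK_SIZE = 8 * 1024; // illustrative chunk size

    private final String blobId;
    private final PushbackInputStream in;
    private long progress;

    public ChunkedBlobStream(String blobId, InputStream in) {
        this.blobId = blobId;
        this.in = new PushbackInputStream(in);
    }

    @Override
    public boolean isEndOfInput() throws Exception {
        // Peek one byte to detect the end of the stream, as Netty's own
        // ChunkedStream does.
        int b = in.read();
        if (b < 0) {
            return true;
        }
        in.unread(b);
        return false;
    }

    @Override
    public void close() throws Exception {
        in.close();
    }

    @Deprecated
    @Override
    public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
        return readChunk(ctx.alloc());
    }

    @Override
    public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception {
        if (isEndOfInput()) {
            return null;
        }
        byte[] payload = new byte[CHUNK_SIZE];
        int read = in.read(payload);
        progress += read;

        // Prepend a small header (here just the blob id, length-prefixed) to
        // the payload, mirroring the header + payload layout used today.
        byte[] header = blobId.getBytes(StandardCharsets.UTF_8);
        ByteBuf chunk = allocator.buffer(4 + header.length + read);
        chunk.writeInt(header.length);
        chunk.writeBytes(header);
        chunk.writeBytes(payload, 0, read);
        return chunk;
    }

    @Override
    public long length() {
        return -1; // total length unknown when streaming from an InputStream
    }

    @Override
    public long progress() {
        return progress;
    }
}
{code}

On the server side the pipeline would gain a {{ChunkedWriteHandler}} and the blob would be written with something like {{ctx.writeAndFlush(new ChunkedBlobStream(blobId, stream))}}; the handler then calls {{readChunk}} again only once the channel becomes writable, which is exactly the back-pressure behaviour missing today.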

/cc [~frm]


